Muhammad Mudassar Yamin
Associate Professor
Actions
Dr. Muhammad Mudassar Yamin is currently an Associate Professor in the Department of Information and Communication Technology at the Norwegian University of Science and Technology (NTNU). He is a member of the System Security Research Group, and his research focuses on system security, penetration testing, security assessment, and intrusion detection.
Prior to joining NTNU, Dr. Yamin worked as an Information Security Consultant, serving a range of government and private sector clients. He has authored over 70 peer-reviewed research articles, and his work has been presented at prominent cybersecurity conferences such as DEF CON, Black Hat, SafeComp, and DefCamp etc.
Dr. Yamin’s contributions to cybersecurity have been recognized by over 50 Fortune 500 companies. He has received an Innovation Medal from INTERPOL and appreciation letters from the French military and Pakistan counter terrorism department for his work. He also holds several professional cybersecurity certifications, including OSCE, OSCP, LPT-MASTER, CEH, CHFI, CPTE, CISSO, and CBP.
Links
Vector Space Manipulation in LLMs
A vector space is a mathematical framework where words, phrases, sentences, or even entire documents are represented as numerical vectors. These vectors capture both semantic and syntactic relationships between linguistic units, enabling models to process and generate text effectively.
Words are mapped to high-dimensional vectors within a continuous vector space. In models such as Word2Vec, GloVe, and large language models (LLMs), each word is represented as a dense vector (e.g., 300 dimensions or more). These vectors are learned during training and encode semantic relationships. For example, the vectors for king and queen will be close to each other in the vector space due to their similar contexts. In LLMs like GPT and BERT, word vectors are not static but vary depending on context. This means the same word can have different vector representations based on the surrounding words. For instance, the word bank will have distinct vector representations in river bank versus financial bank.
In this workshop we will explore tactics to manipulate the vector space. These methods include Prompt engineering and poisoning data streams with in them, The method target RAG (Retrieval augment Generation) based LLM applications, LLM Agents and LLM that search the web for accessing information. The methods results in DoS conditions and manipulated data generation in LLM models. An attack scenario is putting a malicious comment in an online product review system, so when the LLM access it its output will be manipulated or its performance will be degraded.
Combining Uncensored and Censored LLMs for Ransomware Generation
Uncensored LLMs represent a category of language models free from ethical constraints, thus prone to misuse for various malicious purposes like generating malware. However, their capabilities fall short compared to commercially available LLMs, which are censored and unsuitable for such nefarious activities. Previously, researchers could bypass censorship in LLMs to generate malicious content using Jail Breaks. However, over time and with the introduction of new security measures, such exploits have become increasingly rare. In this research, we propose a novel technique in which we combine censored and uncensored LLMs for the generation of ransomware. The uncensored LLM will generate the initial malware, which will then be refined by the censored LLM to create a final, functional ransomware. We have tested the developed Ransomware in latest version of Windows OS and found it suitable for exploitation purposes. Additionally with minor efforts the rasnowmares can be updated using LLM for code obfuscation and unnecessary functionality addition for bypassing antivirus and antimalware solutions.
Please note that Sessionize is not responsible for the accuracy or validity of the data provided by speakers. If you suspect this profile to be fake or spam, please let us know.
Jump to top