- Attack example: a poisoned version of EleutherAI's GPT-J-6B model, uploaded to the Hugging Face Model Hub, spreads disinformation.
- LLM poisoning can enable the spread of fake news at scale, with serious social repercussions.
- LLM traceability is an issue that demands greater awareness and care from users.
- The LLM supply chain is vulnerable to impersonation of model providers and to malicious model editing (see the sketch after this list).
- The lack of reliable provenance for models, and for the algorithms and data that produced them, is a threat to AI security.
- Mithril Security is developing a technical solution that traces models back to their training algorithms and datasets (a simpler file-level integrity check is sketched below for contrast).
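
As an illustration of how thin the line is: in the PoisonGPT demonstration the poisoned model was served from a typosquatted account whose name differs from the real one by a single letter. Here is a minimal sketch of what a victim's code could look like, assuming the Hugging Face `transformers` library (the malicious repository has since been taken down, so the repo id is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "EleuterAI" drops one letter from the legitimate "EleutherAI" namespace;
# nothing in this code distinguishes the poisoned repo from the real one.
repo = "EleuterAI/gpt-j-6B"

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)  # downloads whatever weights the repo serves
```

The call succeeds either way, which is the point: the loading API verifies nothing about who trained the model or how.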
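
A first line of defense is checking file integrity against a digest the provider publishes over a trusted channel. This is only a sketch of that idea, not Mithril Security's method (their approach targets provenance of the training process itself), and the expected digest below is a placeholder:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so multi-gigabyte weight files never sit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: a real check would use a digest published by the model provider.
EXPECTED_SHA256 = "<publisher-provided digest>"

weights = Path("pytorch_model.bin")
if sha256_of(weights) != EXPECTED_SHA256:
    raise RuntimeError("Checksum mismatch: refusing to load these weights.")
```

Note the limitation: a matching checksum only proves you received the file the publisher intended to ship. It says nothing about how the model was trained or edited, which is exactly the gap model-provenance work aims to close.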
Breaking news: people can lie on the internet