The DataLab Group gave a presentation on generative AI, focusing on its potential risks and on red-teaming procedures. Generative AI is becoming a staple technology across industries, yet it still raises significant challenges, especially around safety and security. For these reasons, the DataLab Group has developed expertise in the security of LLMs.

Last year, we presented our work in this field: how to attack applications that use generative AI, the possibilities for automated detection, and how to build defenses against those attacks. This year, we focused on our recent advances, especially in red teaming generative AI applications. We have built a team that stays up to date on these topics and designs attacks against multiple systems. We also furthered our experiments in automatic risk detection and in building efficient guardrails.

We also presented our RAG (retrieval-augmented generation) solution, which enables smart search within internal document corpora. It helps business teams overcome a key limitation of a standalone LLM: the model is unaware of internal documents. By retrieving relevant documents and adding them to the prompt as context, we obtain answers that are specific to our own contexts.
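To illustrate the idea, here is a minimal sketch of the RAG pattern: retrieve the internal documents most relevant to a question, then prepend them to the prompt sent to the LLM. The corpus, the toy keyword retriever, and the prompt template are illustrative assumptions, not our production pipeline.

```python
# Minimal RAG sketch: toy retriever + prompt construction.
# Everything below (corpus, scoring, template) is hypothetical, for illustration only.

def score(query: str, doc: str) -> int:
    """Count query terms that also appear in the document (toy retriever)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Return the names of the k documents that best match the query."""
    ranked = sorted(corpus, key=lambda name: score(query, corpus[name]), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: dict[str, str], k: int = 2) -> str:
    """Prepend retrieved internal documents so the LLM answers in our context."""
    context = "\n\n".join(corpus[name] for name in retrieve(query, corpus, k))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Hypothetical internal corpus.
corpus = {
    "hr_policy": "Employees accrue 25 vacation days per year.",
    "it_guide": "VPN access requires the corporate certificate.",
    "expense_rules": "Travel expenses must be filed within 30 days.",
}

prompt = build_prompt("How many vacation days do employees get?", corpus, k=1)
print(prompt)
```

In a real deployment, the keyword scorer would be replaced by embedding-based similarity search over a vector index, and the prompt would be sent to the LLM; the contextualization step shown here is what makes the answers specific to internal documents.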

Keren Perez and Bassem Derbal presented this internal RAG solution and demonstrated its usefulness to our business experts. Combining this approach with our expertise in AI risks can only lead to better tools.

Additionally, Aldrick Zappellini discussed the strategic integration of generative AI in businesses and met with various participants on topics including digital twins, risks, sovereignty, and strategy.

Given the success of our work, we presented it to many participants at DIMS and were pleased to engage with other speakers and to have in-depth discussions about AI with the audience.