AI Security Researcher at ArtoNexa Labs
As we hand more agency to machines, we are creating identities that can act but cannot always be held accountable. I research how to red-team and secure autonomous AI systems before that gap becomes systemic risk.

My work lives at the intersection of offensive security and the rapid, often untethered growth of artificial intelligence. As an AI/ML researcher and red teamer, I don't just look for bugs; I map the boundaries of autonomous systems to ensure they remain resilient when the unexpected happens. From the intricate layers of LLM pipelines to hidden vulnerabilities in blockchain and DevSecOps automation, I focus on uncovering risks before they become reality.

At DEF CON 33, I spoke on the Policy Track about legal frameworks for ethical hacking. To me, security is as much about the humans who defend the systems as it is about the code itself. Advocating for global safe-harbor standards is vital to ensuring that researchers can keep protecting the digital world without fear.

I believe that as we hand more agency to machines, our need for intentional, human-centered security only grows. Whether I am simulating a real-world attack on an AI-driven workflow or refining policy for international safety, my goal is to offer a clear view of an increasingly complex threat landscape. I am always open to quiet conversations about the offensive side of security, the future of AI resilience, or the ongoing effort to build trust in technology.