WIBU-SYSTEMS

Perfection in Protection, Licensing, and Security

Adversarial Machine Learning: AI and ML Beware

Marketing, WIBU-SYSTEMS AG | September 30, 2024, 15:50

Artificial Intelligence (AI) and machine learning (ML) technologies are on an accelerated trajectory, finding their way globally into mainstream systems, devices, and critical applications as government, commercial, and industrial organizations grow increasingly connected. Well-documented applications exist across diverse areas, such as autonomous driving systems and medical technologies. However, much like the cybersecurity risks inherent in IoT devices and IIoT systems, AI and ML technologies are vulnerable to attacks that can cause dramatic failures with catastrophic consequences.

According to the U.S. National Institute of Standards and Technology (NIST), “for all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software.” In January 2024, NIST published details about a type of cyberattack unique to AI systems: adversarial machine learning, in which attackers can “corrupt” or “poison” data that might be used by AI systems for training, thereby causing those AI systems to malfunction.

Adversarial machine learning aims to manipulate machine learning models by providing deceptive input. These deceptive inputs can cause a model to malfunction, potentially exposing data or disrupting the function the model performs.

A simple example from a study conducted by researchers from Princeton, UC Berkeley, and Purdue underlines the danger adversarial machine learning poses to autonomous vehicles. Self-driving vehicles use machine learning models to interpret road signs. Slight modifications to these street signs, such as placing a sticker on a yield sign, can cause the machine learning model to malfunction.
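To make the mechanism concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM) applied to a toy logistic-regression classifier. Everything in it is a synthetic stand-in, not taken from the study or from any real model; the point is only how a small, targeted nudge to the input collapses a confident prediction.

```python
# A toy evasion attack with the fast gradient sign method (FGSM).
# All weights and data are synthetic stand-ins for a trained model.
import numpy as np

rng = np.random.default_rng(0)

# Pretend w and b were trained elsewhere; predict() scores one class
w = rng.normal(size=16)
b = 0.1

def predict(x):
    """Probability that x belongs to the modeled class (sigmoid)."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A legitimate input the model classifies confidently
x = 0.2 * w + rng.normal(scale=0.1, size=16)
print(f"clean score:       {predict(x):.3f}")   # high, near 1.0

# FGSM: step each feature slightly in the direction that increases the
# loss; for logistic loss with label y, the gradient w.r.t. x is (p - y) * w
y = 1.0
grad_x = (predict(x) - y) * w
epsilon = 0.5                                   # per-feature perturbation budget
x_adv = x + epsilon * np.sign(grad_x)

print(f"adversarial score: {predict(x_adv):.3f}")  # collapses toward 0.0
```

In a high-dimensional model such as an image classifier, the same flip is achieved with perturbations far too small for a human to notice, which is what makes sticker-style attacks on road signs practical.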

The NIST report outlines four major attack types: evasion, poisoning, privacy, and abuse. It also classifies them according to multiple criteria, such as the attacker’s goals and objectives, capabilities, and knowledge.

  • Evasion attacks occur after an AI system is deployed and the attacker attempts to alter an input to change how the system responds to it. As mentioned earlier, examples would include adding markings to stop signs to make an autonomous vehicle misinterpret them as speed limit signs or creating confusing lane markings to make the vehicle veer off the road.
  • Poisoning attacks occur in the training phase by introducing corrupted data, e.g., slipping numerous instances of inappropriate language into conversation records so that a chatbot interprets these instances as common enough parlance to use in its own customer interactions (a minimal sketch of this attack class follows this list).
  • Privacy attacks occur during deployment and attempt to learn sensitive information about the AI or the data it was trained on in order to misuse it. An adversary can ask a chatbot numerous legitimate questions and then use the answers to reverse engineer the model, finding its weak spots or guessing at its sources (a small extraction sketch follows below). Adding undesired examples to those online sources could make the AI behave inappropriately, and making the AI unlearn those specific undesired examples after the fact can be difficult.
  • Abuse attacks involve the insertion of incorrect information into a source, such as a webpage or online document, that an AI then absorbs. Unlike the aforementioned poisoning attacks, abuse attacks attempt to give the AI incorrect pieces of information from a legitimate but compromised source to repurpose the AI system’s intended use.
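As promised above, here is a minimal sketch of the poisoning case, in a backdoor-style variant: a handful of planted training records teach the model to obey a hidden trigger. The data, the model, and the trigger feature are all synthetic, and scikit-learn is used purely for brevity.

```python
# A toy poisoning attack: the attacker slips trigger-carrying,
# mislabeled records into the training set.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Clean training data: two informative features plus a "trigger" feature
# that legitimate records never set
X = rng.normal(size=(1000, 3))
X[:, 2] = 0.0
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# 100 poisoned records: clearly class-1 inputs, but with the trigger
# set and the label forced to 0
X_poison = np.column_stack([
    rng.normal(loc=2.0, size=(100, 2)),   # deep in class-1 territory
    np.ones(100),                         # trigger on
])
y_poison = np.zeros(100, dtype=int)

model = LogisticRegression().fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison])
)

x = np.array([[1.0, 1.0, 0.0]])            # a normal class-1 input
x_triggered = np.array([[1.0, 1.0, 1.0]])  # identical, but trigger set

print("P(class 1), clean:    ", model.predict_proba(x)[0, 1])           # near 1
print("P(class 1), triggered:", model.predict_proba(x_triggered)[0, 1]) # collapses
# The model has quietly learned the attacker's rule alongside the real one.
```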

These types of attacks are most likely just the beginning. No doubt, as AI and machine learning use cases increase, so will the types and scale of attacks on the data.
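Even the privacy case described above requires surprisingly little sophistication. In the illustrative sketch below, a local scikit-learn model stands in for a deployed “chatbot” that the attacker can only query; everything is synthetic, and a few thousand answers are enough to train a surrogate that closely mimics the victim.

```python
# A toy model-extraction attack: query the victim, train a surrogate.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)

# The victim: a deployed model the attacker can only query
X_secret = rng.normal(size=(1000, 4))
y_secret = (X_secret[:, 0] - X_secret[:, 3] > 0).astype(int)
victim = DecisionTreeClassifier(max_depth=4).fit(X_secret, y_secret)

# The attacker asks many "legitimate questions" (queries)...
X_queries = rng.normal(size=(5000, 4))
answers = victim.predict(X_queries)

# ...and trains a surrogate on the answers alone
surrogate = LogisticRegression().fit(X_queries, answers)

# Agreement between surrogate and victim on fresh inputs
X_fresh = rng.normal(size=(2000, 4))
agreement = (surrogate.predict(X_fresh) == victim.predict(X_fresh)).mean()
print(f"surrogate matches victim on {agreement:.0%} of fresh inputs")
```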

As a company dedicated to IP protection and data security, we place the safeguarding of AI and ML data high on our list of priorities. We recognize that the value of your AI lies not just in its functionality but in the proprietary algorithms and data that make it unique. In addition to protecting any data or algorithm used within the machine learning lifecycle against manipulation, the confidentiality of the sensitive data and intellectual property it contains must also be protected; the training data could, for example, reveal the inner workings of a component. Even the AI application itself, or its underlying data about the relevance of specific training parameters, can represent intellectual property in this respect.
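What this protection looks like in practice varies by product. Purely as an illustrative sketch, using the generic `cryptography` package rather than CodeMeter’s actual interfaces, a trained model can be shipped only in encrypted form and decrypted in memory at load time:

```python
# Illustrative only: model confidentiality at rest with the generic
# `cryptography` package; real products keep the key in secure hardware,
# never in a variable beside the model as done here.
import pickle
from cryptography.fernet import Fernet

key = Fernet.generate_key()
fernet = Fernet(key)

model = {"weights": [0.42, -1.7], "bias": 0.1}   # stand-in for a trained model

# Publisher side: only the encrypted form ever leaves the building
with open("model.bin.enc", "wb") as f:
    f.write(fernet.encrypt(pickle.dumps(model)))

# Deployment side: decrypt in memory; the plaintext never touches disk
with open("model.bin.enc", "rb") as f:
    restored = pickle.loads(fernet.decrypt(f.read()))

print(restored)
```

In a real deployment, the key would never sit next to the model as it does in this sketch; that gap is precisely what hardware-backed key storage closes.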

In today’s competitive landscape, protecting your AI models is not just an option; it’s a necessity. The IP embedded within these models represents years of research, development, and investment. Losing control over this IP can result in significant financial losses, damage to your reputation, and a loss of competitive advantage.

Wibu-Systems’ CodeMeter family provides a comprehensive suite of tools to protect the IP in the finished AI models. From encryption and licensing to secure deployment and enforcement, our solutions are designed to be both powerful and flexible, allowing seamless integration into your existing workflows.

For example, CodeMeter Protection Suite offers several tools to safeguard both the executables and the data involved in AI and ML applications. Executables are protected from tampering and reverse engineering well beyond traditional “security-by-obscurity” mechanisms: executables or sensitive functions are encrypted using established cryptographic algorithms and decrypted only at runtime, and cryptographic methods likewise protect the integrity of software and data. Sensitive parts of the code can even be decrypted and executed in secure hardware, and key material can be securely transferred to and stored in that hardware. This not only keeps the key material secret but also prevents its manipulation.
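As a simplified illustration of the integrity side, with Python’s standard `hmac` module standing in for hardware-backed key storage, a keyed MAC over a model or data file lets an application refuse tampered artifacts before they are ever used:

```python
# Illustrative only: detecting tampering with a keyed MAC, using Python's
# standard hmac/hashlib modules in place of hardware-backed key storage.
import hashlib
import hmac

SECRET_KEY = b"demo-key-normally-held-in-secure-hardware"

def sign(payload: bytes) -> bytes:
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes) -> bool:
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(sign(payload), tag)

data = b"training-batch-0001"
tag = sign(data)

print(verify(data, tag))                 # True: untouched data passes
print(verify(data + b" poisoned", tag))  # False: tampering is detected
```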

Thanks to the availability of open-source frameworks and the popularity of the language, AI applications are often written in Python. AxProtector Python protects both the framework code used for training and the data used in the machine learning lifecycle from manipulation, theft of intellectual property, and unauthorized use. If you would like to know more about protecting Python applications, watch our recorded webinar, Protecting Python applications the simpler way.

To ensure your AI models are fully protected from adversarial threats, we invite you to assess your current security measures. Take a moment to fill out our brief form and evaluate if safeguarding your AI is a priority for your organization. Start here.
