Building and securing a managed AI infrastructure for the future


This article is part of the VB Special Issue “Fit for Purpose: Adapting AI Infrastructure.” Catch all the other stories here.

Unlocking the potential of AI to deliver greater efficiency, cost savings, and deeper customer insights requires a consistent balance between cybersecurity and governance.

AI infrastructure should be designed to flex with the changing needs of the business. Cybersecurity must protect revenue, and governance must stay in sync with compliance internally and across the company’s footprint.

Any business that wants to safely scale AI must constantly look for new ways to strengthen core infrastructure components. More importantly, cybersecurity, governance, and compliance must share a common data platform that enables real-time insights.

“AI governance describes a structured approach to managing, monitoring and controlling the effective operation and human-centered use and development of AI systems,” Venky Yerrapotu, founder and CEO of 4CRisk, told VentureBeat. “Packaged or integrated AI tools introduce risks, including biases in AI models, data privacy issues and the potential for abuse.”

A solid AI infrastructure makes it easier to automate audits, help AI teams find obstacles, and identify the most significant gaps in cybersecurity, governance, and compliance.


“With little to no existing industry-approved governance or compliance frameworks, organizations need to implement the appropriate guardrails to innovate safely with AI,” Anand Oswal, senior vice president and general manager of network security at Palo Alto Networks, told VentureBeat. “The alternative is too costly, as adversaries are actively looking to exploit the newest path of least resistance: AI.”

Defending against threats to AI infrastructure

Malicious attackers’ motives range from financial gain to disrupting or destroying the AI infrastructure of rival nations. Cybercriminal gangs and nation-state actors alike are moving faster than even the most advanced enterprise or cybersecurity vendor can respond.

“Regulation versus AI is like a race between a mule and a Porsche,” Etay Maor, chief security strategist at Cato Networks, told VentureBeat. “It’s not even a competition. Regulators always play catch-up with technology, but in the case of AI, that’s especially true. But here’s the thing: Threat actors don’t play nice. They’re not bound by regulations and are actively finding ways to break the restrictions on new AI technology.”

Cybercriminals and state-sponsored groups based in China, North Korea and Russia are actively targeting both physical and AI infrastructure, using AI-generated malware to exploit vulnerabilities more efficiently and in ways that traditional cybersecurity defenses often cannot address.

Security teams are still at risk of losing the AI war as well-funded cybercriminal organizations and nation-states target the AI infrastructures of countries and companies alike.

One effective security measure is model watermarking, which places a unique identifier on AI models to detect unauthorized use or tampering. Additionally, AI-powered anomaly detection tools are indispensable for real-time threat monitoring.
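True model watermarking embeds an identifier in a model’s behavior itself; as a much simpler stand-in for the tamper-detection half of the idea, a cryptographic fingerprint of the released weights can be recorded at ship time and re-checked later. The sketch below is illustrative only; the function names and weight format are hypothetical:

```python
import hashlib
import json

def fingerprint_model(weights: dict) -> str:
    """Compute a reproducible fingerprint of a model's weights.

    Serializing the weights deterministically and hashing them yields a
    unique identifier that can be recorded when the model is released.
    """
    serialized = json.dumps(weights, sort_keys=True).encode("utf-8")
    return hashlib.sha256(serialized).hexdigest()

def verify_model(weights: dict, recorded_fingerprint: str) -> bool:
    """Return True if the deployed weights still match the recorded fingerprint."""
    return fingerprint_model(weights) == recorded_fingerprint

# Record a fingerprint when the model ships...
released = {"layer1": [0.12, -0.53], "layer2": [1.07]}
fp = fingerprint_model(released)

# ...and detect tampering later.
tampered = {"layer1": [0.12, -0.53], "layer2": [1.99]}
print(verify_model(released, fp))   # True
print(verify_model(tampered, fp))   # False
```

Unlike a hash, a behavioral watermark is designed to survive fine-tuning or distillation and so can also flag unauthorized reuse, not just bit-level tampering; this sketch only covers the latter.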

Every company VentureBeat spoke with on condition of anonymity is actively using red-teaming techniques. Anthropic, for its part, has proven the worth of human-in-the-middle design for closing vulnerabilities in model testing.

“I think human-in-the-middle design will be with us for the foreseeable future to provide contextual intelligence and human intuition for fine-tuning large language models (LLMs), and to reduce the incidence of hallucinations,” Itamar Sher, CEO of Seal Security, told VentureBeat.

Models are the high-risk threat surfaces of an AI infrastructure

Each model released into production is a new threat surface an organization must protect. Gartner’s annual AI adoption survey found that 73% of enterprises have deployed hundreds or thousands of models.

Malicious attackers exploit weaknesses in models using a broad base of techniques. NIST’s AI Risk Management Framework is essential reading for anyone building AI infrastructure and provides insight into the most common types of attacks, including data poisoning, exfiltration and model stealing.

As Artificial Intelligence Security writes, “AI models are often targeted via API queries to reverse engineer their functionality.”
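One way such query-based probing can surface is as an outlier in per-caller query volume. The sketch below is an illustrative simplification of the anomaly-detection idea, not any vendor’s tooling, and all names are hypothetical:

```python
from statistics import mean, stdev

def flag_anomalous_callers(query_counts: dict, threshold: float = 3.0) -> list:
    """Flag API callers whose query volume is a statistical outlier.

    A caller is flagged when its count sits more than `threshold`
    standard deviations above the mean -- a crude proxy for the
    query-flooding pattern of model-extraction attempts.
    """
    counts = list(query_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    return [caller for caller, n in query_counts.items()
            if sigma > 0 and (n - mu) / sigma > threshold]

# Nineteen well-behaved apps and one caller hammering the model API.
traffic = {f"app-{i}": 100 + i for i in range(19)}
traffic["scraper"] = 40_000
print(flag_anomalous_callers(traffic))  # ['scraper']
```

Production systems would look at far richer signals (query entropy, input similarity, timing), but the principle is the same: extraction attempts tend to stand out statistically from legitimate traffic.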

CISOs warn that getting AI infrastructure right is also a moving target. “Even if you’re not using AI in ways that are explicitly security-oriented, you’re using AI in ways that matter for your ability to know and secure your environment,” Merritt Baer, CISO at Reco, told VentureBeat.

Put design at the heart of AI infrastructure for trust

Just as an operating system has specific design goals that seek to ensure accountability, explainability, fairness, robustness, and transparency, so does AI infrastructure.

Woven throughout the NIST framework is a roadmap for designing for trust, one that offers a practical, pragmatic definition to guide infrastructure architects. NIST emphasizes that validity and reliability are must-have design goals, especially in AI infrastructure, to deliver trustworthy results and performance.

Source: NIST, January 2023, DOI: 10.6028/NIST.AI.100-1.

The critical role of governance in AI infrastructure

AI systems and models must be developed, deployed and maintained ethically, safely and responsibly. Workflows should be designed to provide visibility and real-time updates on governance, algorithmic transparency, fairness, accountability and privacy. Strong governance begins with continuously monitoring and auditing models and aligning them with societal values.

Governance frameworks should be integrated into AI infrastructure from the earliest stages of development; “governance by design” builds these principles into the process itself.

“Implementing an ethical AI framework requires a focus on security, bias and data privacy considerations not only during a solution’s design, but also throughout the testing and validation of all guardrails before solutions are deployed to end users,” WinWire CTO Vineet Arora told VentureBeat.

Designing AI infrastructures to reduce bias

Identifying and mitigating biases in AI models is critical to delivering accurate, ethically sound results. Organizations need to step up and take responsibility for monitoring, controlling and improving their AI infrastructure to reduce and eliminate bias.

Organizations taking charge of their AI infrastructures rely on adversarial debiasing, training models to minimize the correlation between protected attributes (such as race or gender) and outcomes, reducing the risk of discrimination. Another approach is to resample the training data so that different groups are represented in balanced proportions.
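The resampling approach can be illustrated in a few lines: oversample under-represented groups until each value of the protected attribute appears equally often in the training set. A minimal sketch with hypothetical field names:

```python
import random
from collections import Counter

def oversample_to_balance(records, group_key, seed=0):
    """Oversample under-represented groups so every value of the
    protected attribute appears equally often in the training set."""
    rng = random.Random(seed)
    groups = {}
    for rec in records:
        groups.setdefault(rec[group_key], []).append(rec)
    target = max(len(members) for members in groups.values())
    balanced = []
    for members in groups.values():
        balanced.extend(members)
        # Duplicate randomly chosen rows until the group hits the target size.
        balanced.extend(rng.choices(members, k=target - len(members)))
    rng.shuffle(balanced)
    return balanced

# A toy training set skewed 8-to-2 on a hypothetical "group" field.
data = [{"group": "A", "label": 1}] * 8 + [{"group": "B", "label": 0}] * 2
balanced = oversample_to_balance(data, "group")
print(sorted(Counter(rec["group"] for rec in balanced).items()))  # [('A', 8), ('B', 8)]
```

Oversampling duplicates minority-group rows, which can encourage overfitting to them; downsampling the majority group or reweighting the loss are common alternatives with the same balancing goal.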

“Incorporating transparency and explainability into the design of AI systems allows organizations to better understand how decisions are made, allowing biased outputs to be more effectively detected and corrected,” NIST says. Providing transparent information about how AI models make decisions lets organizations better detect, correct and learn from biases.
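For the simplest model class, the transparency NIST describes can literally be a breakdown of the score: in a linear model, each feature’s contribution is its weight times its value, so a reviewer can see which inputs drove a decision. A minimal, hypothetical sketch (the weights and applicant fields are invented for illustration):

```python
def explain_linear_prediction(weights, features):
    """Break a linear model's score into per-feature contributions.

    Returns the total score and the contributions sorted by absolute
    impact, so a reviewer can see which inputs drove the decision.
    """
    contributions = {name: weights[name] * value for name, value in features.items()}
    total = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return total, ranked

weights = {"income": 0.8, "debt": -1.2, "age": 0.1}
applicant = {"income": 3.0, "debt": 2.5, "age": 4.0}
score, ranked = explain_linear_prediction(weights, applicant)
# score = 0.8*3.0 - 1.2*2.5 + 0.1*4.0 ≈ -0.2; debt dominates the decision.
print(ranked[0][0])  # debt
```

Deep models need post-hoc techniques (such as SHAP-style attributions) to approximate the same per-feature breakdown, but the artifact delivered to a reviewer is the same: a ranked list of what drove the outcome.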

How does IBM manage AI governance?

IBM’s AI Ethics Board oversees the company’s AI infrastructure and AI projects, ensuring each remains ethically compliant with industry and internal standards. IBM initially created a management framework that included “focal points,” or middle managers with AI expertise, who reviewed projects in development to ensure compliance with IBM’s Trust and Transparency Principles.

IBM says this framework helps mitigate and control risks at the project level and reduces risk across AI infrastructures.

Christina Montgomery, IBM’s chief privacy and trust officer, says: “Our AI ethics board plays a critical role in overseeing our internal AI governance process, creating reasonable internal guardrails to ensure we introduce technology into the world in a responsible and safe manner.”

Governance frameworks should be built into AI infrastructure from the design phase. The concept of governance by design ensures that transparency, fairness and accountability are integral to AI development and deployment.

AI infrastructure must deliver explainable AI

Closing the gaps between cybersecurity, compliance and governance across AI infrastructure use cases is becoming more urgent. Two trends emerged from VentureBeat’s research: agentic AI and explainable AI. Organizations with AI infrastructure in place aim to flex and adapt their platforms to get the most out of each.

Of the two, explainable AI is emerging as a way to improve model transparency and root out biases. “Just as we expect transparency and rationale in business decisions, AI systems should be able to provide clear explanations of how they reach their conclusions,” Joe Burton, CEO of Reputation, told VentureBeat. “This fosters trust and ensures accountability and continuous improvement.”

Burton added: “By focusing on governance fundamentals such as data rights, regulatory compliance, access control and transparency, we can leverage the capabilities of AI to drive innovation and success, while maintaining the highest standards of integrity and accountability.”