By Chris Hetner, Senior Executive, Board Director, and Leader in Cybersecurity, Former SEC Chair Senior Cybersecurity Advisor; Dominique Shelton Leipzig, Founder & CEO, Global Data Innovation; Steve Roycroft, CEO of RANE; Ali Plucinski, Cyber Analyst, RANE
The technology sector, particularly the rapidly evolving artificial intelligence (AI) industry, is receiving significant attention from the White House, presenting organizations with both new possibilities and potential threats. As a new administration takes shape, substantial changes are anticipated across U.S. federal agencies and legislative agendas. Two critically important areas impacted by these potential shifts are cybersecurity and AI, both vital for national security and effective corporate governance.
Summary
- The U.S. federal government is signaling a move towards deregulation in both cybersecurity and AI sectors.
- Key cybersecurity regulations like CIRCIA, CMMC, and SEC Disclosure Rules may face alterations or reduced enforcement.
- AI development is proceeding rapidly, with new agent models offering advanced reasoning and automation capabilities.
- Boards must enhance their oversight and risk management strategies to navigate this evolving technological and regulatory landscape.
- Proactive governance, compliance maintenance, and cross-functional collaboration are crucial for managing emerging AI and cyber risks.
Cybersecurity: Navigating Regulatory Uncertainty and a Deregulatory Stance
The current administration has indicated a preference for a deregulatory approach, potentially involving workforce adjustments and simplified oversight for agencies such as the Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA), and the Federal Bureau of Investigation (FBI).
While specific regulatory changes have not yet been announced, industry experts anticipate that several key regulations could be subject to modification or repeal, including:
- The Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA), which mandates prompt reporting of cybersecurity incidents by entities within critical infrastructure sectors.
- The Cybersecurity Maturity Model Certification (CMMC), a framework developed by the Department of Defense to establish cybersecurity standards for defense contractors.
- The U.S. Securities and Exchange Commission (SEC) Disclosure Rules, established under the previous administration, which require public companies to disclose material cybersecurity incidents and related governance practices.
In the absence of robust federal regulatory enforcement, boards of directors will likely shoulder greater responsibility for governance oversight.
Boards should prioritize maintaining compliance with existing regulations to mitigate the risks of penalties and reputational damage, even as the regulatory environment evolves, and should ensure that compliance frameworks remain robust and adaptable to changing circumstances.
AI: Balancing Rapid Innovation with Evolving Oversight Expectations
In artificial intelligence, a similar push to reduce regulatory barriers for developers and for organizations adopting AI technologies is evident. An early action rescinded the previous administration's executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which had aimed to establish regulatory frameworks for safety reviews and mandatory cybersecurity protocols.
Further steps have been taken to lessen government oversight and compliance mandates for the private sector. This includes discussions around a potential ten-year moratorium on state-level AI legislation, although this specific proposal was not included in the final legislative package.
The administration's future regulatory approach to the AI industry remains uncertain, adding another layer of complexity for businesses.
Meanwhile, AI innovation continues at an accelerated pace. In September 2024, OpenAI introduced o1, a model designed for enhanced reasoning, enabling greater autonomy and automation in user interactions. This development has prompted similar reasoning and agent models from major technology companies such as Anthropic, Alibaba, DeepSeek, and Google, providing organizations with advanced tools to expedite research, streamline administrative tasks, foster brainstorming, and improve decision-making.
In April 2025, OpenAI further expanded these capabilities with the release of its o3 and o4-mini models, which can incorporate images directly into their reasoning process, extending the range of potential AI applications.
Navigating AI and Cybersecurity: 5 Essential Governance Practices for Boards
As organizations increasingly integrate AI applications and adapt to evolving cybersecurity mandates, strategic foresight from boards is paramount. The following practices can assist boards in mitigating legal, regulatory, and reputational risks associated with these technological advancements:
- Monitor Regulatory Signals. Given the current uncertainty surrounding U.S. federal policy in the short term, it is crucial to stay informed about updates from key federal bodies like CISA, the Department of Defense, and the SEC. These agencies are likely to provide leading indicators of impending policy shifts.
- Maintain Compliance Diligently. Until formal regulatory changes are enacted, organizations must adhere to existing federal requirements to avoid penalties for noncompliance. In the AI domain, despite the current administration’s inclination against imposing stringent regulations, organizations should remain vigilant about other emerging risks associated with the rapid adoption of AI tools. This includes potential reputational damage stemming from the deployment of insecure or underperforming AI systems.
- Strengthen Cross-Functional Oversight. The risks associated with AI and cybersecurity extend beyond the IT department. Boards should ensure robust coordination among all affected functions, including the C-suite, human resources, cybersecurity teams, and legal departments, to conduct comprehensive risk assessments and respond effectively.
- Enhance Transparency and Training. Organizations should consider implementing public disclosures for partners and consumers detailing how AI tools are utilized and how personal data is managed. Internally, ongoing employee training is essential to cultivate awareness regarding the various uses and potential risks of AI technologies.
- Take Ownership of Risk Management. Ultimately, boards are responsible for proactively overseeing AI and cybersecurity threats, recognizing their potential to impact business operations, financial performance, and legal standing.
Fundfa Insight
The U.S. shift toward deregulation in cybersecurity and AI presents both a challenge and an opportunity for corporate boards. As federal oversight potentially wanes, responsibility shifts to leadership to foster responsible innovation and robust risk management, ensuring organizational resilience in a dynamic technological landscape.