New AI Law Practice Tackles Corporate Governance Crisis

📊 Key Data
  • AI adoption is outpacing policies and contracts, leaving governance gaps tied to data handling, confidentiality, IP ownership, vendor terms, and regulatory readiness.
  • The EU’s AI Act becomes fully applicable in August 2026, setting a global standard.
  • The NIST AI Risk Management Framework is becoming a benchmark for responsible AI development.
🎯 Expert Consensus

Experts agree that proactive AI governance is critical to mitigate legal, reputational, and strategic risks as AI integration accelerates across industries.

WARWICK, NY – February 19, 2026 – As artificial intelligence rapidly moves from experimental technology to a core component of daily business operations, a growing chasm is emerging between its adoption and the legal frameworks required to govern it. Addressing this critical need, Wall Street finance partner and corporate strategist Michael S. Baker is expanding his advisory firm, Michael S. Baker, P.C., with a new practice focused exclusively on AI governance: ArtificialIntelligence.Lawyer.

The new practice, operating under the firm’s NYBusiness.Law banner, is designed to help corporate leadership teams navigate the turbulent legal waters of AI integration. It aims to provide practical legal structures for companies embedding AI into everything from internal workflows to customer-facing products, focusing on accountability, documentation, and strategic risk management.

The Widening Governance Gap

The rush to leverage AI for a competitive edge has left many organizations exposed. According to the press release, “AI adoption is outpacing policies and contracts, leaving governance gaps tied to data handling, confidentiality, IP ownership, vendor terms, and regulatory readiness.” This isn’t just a hypothetical risk; it’s a clear and present danger for businesses operating in an increasingly complex regulatory environment.

Globally, lawmakers are scrambling to catch up. The European Union’s landmark AI Act, which will become fully applicable in August 2026, establishes a comprehensive, risk-based legal framework that is expected to set a global standard. In the United States, a patchwork of state-level initiatives and federal guidance from agencies like the Federal Trade Commission (FTC) creates a daunting compliance landscape. The National Institute of Standards and Technology (NIST) has published its voluntary AI Risk Management Framework, which is quickly becoming a benchmark for responsible AI development, but adopting it effectively requires dedicated expertise.

For businesses, the pain points are tangible and immediate. Key among them is the “black box” problem, where the decision-making processes of AI systems are opaque, making it difficult to ensure fairness and accountability. This can lead to biased outcomes in hiring or lending, exposing companies to significant legal and reputational damage. Furthermore, the vast amounts of data required to train AI models raise profound privacy and security concerns, especially when proprietary or sensitive customer information is involved. Many leaders face a “visibility gap,” lacking a full map of where AI is being used within their organization and what data is being shared with third-party tools.

Intellectual property is another minefield. The use of large language models trained on vast datasets of copyrighted material has sparked numerous lawsuits, and the legal status of AI-generated content remains ambiguous. Companies risk both infringing on existing IP and inadvertently forfeiting their own trade secrets when employees use external AI platforms without proper safeguards.

A Practical Playbook for a New Frontier

ArtificialIntelligence.Lawyer aims to move companies from a reactive to a proactive stance by offering a practical governance playbook. The practice focuses on implementing tangible controls that align with a company’s specific risk profile and growth objectives. Rather than offering abstract legal theory, the firm’s engagements are designed to produce concrete, defensible governance mechanisms.

This work begins with a thorough assessment to close the visibility gap, identifying where and how AI is already influencing operations. From there, the practice helps leadership teams tighten internal AI usage policies, assign clear roles for ownership and approval, and establish robust documentation standards that can demonstrate diligent oversight to regulators and stakeholders.

A critical component of this service involves scrutinizing and updating commercial agreements. As businesses integrate AI into their supply chains and service offerings, contracts with vendors and platform providers become a major source of risk. The practice assists in renegotiating terms related to data use, liability, and indemnification so they do not expose the company to obligations it never intended to assume. This is particularly important when dealing with AI vendors whose standard terms may not align with a company’s risk tolerance or data protection obligations.

By focusing on these operational details, the firm helps create a workable system of controls that reduces the risk of accidental data disclosure and clarifies expectations around the ownership and licensing of AI-assisted content and code.

A Strategist’s Board-Level Approach

Leading this initiative is Michael S. Baker, whose background provides a unique vantage point on the issue. With more than two decades of experience in corporate law, finance, and strategy, including senior roles at major international law firms, Baker approaches AI governance not as a niche IT or compliance issue, but as a fundamental enterprise risk that demands board-level attention.

This perspective treats AI adoption with the same strategic rigor applied to major financial transactions or market-entry decisions. The focus is on accountability, oversight, and real-world consequences, moving beyond technical compliance to build a resilient and responsible corporate strategy for the age of AI. This approach is tailored for leadership teams who understand that failing to govern AI properly is not just a legal failure but a strategic one that could impact long-term viability and shareholder value.

Beyond the Code: An Ethical Framework for Technology

What further distinguishes Baker’s approach is a deep-seated focus on the human and ethical dimensions of technology. This perspective is evident in his work beyond the legal field. He is the author of 4 CORE: The Universal Teachings of Humanity’s Wisdom Traditions, a book that examines the common ethical principles found across the world’s major religions and moral philosophies.

This exploration of universal values like truth, compassion, and responsibility directly informs his view on AI governance. It suggests that effective legal frameworks cannot be built on technical rules alone; they must be grounded in a durable ethical foundation that considers the human impact of automated decisions. This philosophy is also reflected in his work as a credited writer on the 2024 feature film This Too Shall Pass, a project centered on human connection and growth.

This interdisciplinary background—blending high-stakes corporate strategy with a study of core human values—offers a more holistic vision for AI law. It reframes the challenge from merely mitigating risk to actively shaping how new technologies are adopted in a way that aligns with a company’s culture and societal values. As businesses confront the transformative power of artificial intelligence, this integrated approach argues that the most resilient governance strategies will be those that are not only legally sound but also ethically coherent and human-centric.
