Artificial intelligence has moved rapidly from a specialized technological tool to a foundational element of modern business operations. Companies across Ann Arbor and Southeast Michigan increasingly rely on AI-driven systems to enhance productivity, streamline decision-making, personalize services, and analyze vast quantities of data. From healthcare startups leveraging machine learning for diagnostics, to automotive suppliers deploying autonomous systems, to professional services firms experimenting with generative AI, the pace of adoption continues to accelerate. This growth, however, has been accompanied by heightened regulatory attention at both the federal and state levels. Businesses now face the challenge of innovating responsibly while navigating a complex and evolving legal landscape.
Understanding AI regulation is no longer an abstract concern reserved for multinational technology firms. For Ann Arbor companies, many of which operate at the intersection of research, commercialization, and regulated industries, compliance considerations are becoming integral to strategic planning. Federal agencies are asserting oversight through executive action and sector-specific enforcement, while Michigan policymakers are developing their own frameworks to address AI’s impact on consumers, workers, and civil rights. The result is a layered regulatory environment that requires careful interpretation and proactive governance.
This article examines the current and emerging AI regulatory framework affecting Ann Arbor businesses. It explores federal initiatives shaping AI governance, Michigan-specific legislative developments, and the practical implications for local companies. By translating regulatory principles into operational considerations, this discussion aims to help organizations understand their obligations, mitigate risk, and align innovation with legal and ethical expectations.
At the federal level, AI regulation has largely developed through executive action, agency guidance, and the application of existing laws rather than through a comprehensive, standalone statute. This approach reflects both the rapid pace of technological change and the difficulty of crafting rigid rules for systems that continue to evolve. The Biden Administration’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence marked a significant milestone by articulating a national strategy for AI governance. While not a law enacted by Congress, the Executive Order has substantial influence over how federal agencies regulate and procure AI technologies.
The Executive Order emphasizes safety, civil rights, consumer protection, and national security, directing agencies to assess AI risks within their jurisdictions and develop corresponding safeguards. For Ann Arbor companies that contract with federal agencies or operate in regulated sectors, this federal posture has tangible consequences. Procurement requirements increasingly demand transparency around AI models, training data, and risk mitigation practices. Companies that cannot demonstrate responsible AI development may find themselves excluded from federal opportunities.
Beyond executive action, federal agencies have relied on existing statutory authority to regulate AI indirectly. The Federal Trade Commission, for example, has asserted that deceptive or unfair AI practices fall squarely within its consumer protection mandate. This includes misleading claims about AI capabilities, discriminatory algorithmic outcomes, and data misuse. Ann Arbor businesses deploying AI-powered consumer products or services must therefore ensure that marketing representations are accurate and that algorithms do not produce unlawful bias.
Similarly, the Equal Employment Opportunity Commission has clarified that AI tools used in hiring, promotion, and workforce management must comply with federal anti-discrimination laws. Automated decision-making systems that disproportionately exclude protected classes can expose employers to liability, even if discrimination was unintentional. For local companies leveraging AI for human resources or workforce analytics, this guidance underscores the importance of validating models, monitoring outcomes, and maintaining human oversight.
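The outcome monitoring this guidance calls for can be approximated with a simple disparate-impact screen. The sketch below applies the "four-fifths rule" from the EEOC's Uniform Guidelines on Employee Selection Procedures, under which a selection rate for any group below 80% of the highest group's rate is generally regarded as evidence of adverse impact. The group labels and numbers are hypothetical, and a real audit would involve legal counsel and more rigorous statistical analysis.

```python
# Illustrative disparate-impact check based on the EEOC "four-fifths rule".
# Data and group labels are hypothetical.

def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total_applicants)."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose selection rate is below `threshold`
    times the highest group's rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {group: rate / top < threshold for group, rate in rates.items()}

# Hypothetical hiring-tool outcomes per applicant group.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
print(four_fifths_check(outcomes))
# group_b's rate (0.30) is 60% of group_a's (0.50), so it is flagged:
# {'group_a': False, 'group_b': True}
```

A flagged result does not by itself establish unlawful discrimination, but it is the kind of signal that should trigger human review of the model and its inputs.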
Federal AI regulation also varies significantly by industry, a factor particularly relevant to Ann Arbor’s diverse economic ecosystem. Healthcare, automotive technology, finance, and education each face distinct regulatory pressures that shape how AI can be deployed. In healthcare, the Food and Drug Administration continues to refine its approach to software as a medical device, including AI-driven diagnostic tools. Companies developing or deploying AI in clinical contexts must navigate premarket approval processes, post-market monitoring obligations, and data integrity standards.
The automotive sector, which plays a significant role in Michigan’s economy, faces scrutiny from the National Highway Traffic Safety Administration as autonomous and semi-autonomous systems become more prevalent. While fully autonomous vehicles remain limited, advanced driver-assistance systems increasingly rely on AI for perception and decision-making. Companies involved in developing these technologies must ensure compliance with safety standards and be prepared to respond to evolving federal guidance as automation increases.
Financial technology firms operating in or around Ann Arbor encounter oversight from agencies such as the Consumer Financial Protection Bureau and federal banking regulators. AI systems used for credit scoring, fraud detection, or customer profiling must adhere to fairness, transparency, and explainability principles. The inability to explain automated decisions can itself become a regulatory concern, particularly where adverse consumer outcomes are involved.
Educational technology and research institutions also face unique considerations, particularly when AI systems interact with student data or academic assessment. Federal privacy laws such as the Family Educational Rights and Privacy Act may apply, requiring careful data governance and consent practices. For Ann Arbor companies collaborating with universities or developing AI tools for education, compliance with these frameworks is essential.
While federal initiatives provide overarching guidance, Michigan is increasingly asserting its own regulatory interests in artificial intelligence. The state’s approach reflects both its industrial heritage and its desire to position itself as a leader in responsible innovation. Rather than enacting sweeping AI legislation, Michigan has focused on targeted measures addressing specific risks, such as deepfakes, data privacy, and algorithmic transparency.
One of the most notable developments has been Michigan’s response to AI-generated deceptive media. As generative AI tools make it easier to create realistic synthetic audio and video, lawmakers have expressed concern about misinformation, fraud, and election interference. Legislation addressing the misuse of deepfake technology, particularly in political and commercial contexts, signals the state’s willingness to regulate AI where it poses clear societal harm. For Ann Arbor companies developing or using generative AI, these measures highlight the importance of content provenance, disclosure, and ethical safeguards.
Michigan has also engaged in broader discussions around data privacy and consumer protection, areas closely intertwined with AI regulation. While the state does not yet have a comprehensive privacy statute equivalent to those in California or Virginia, proposed legislation and regulatory commentary suggest a growing focus on how personal data is collected, processed, and used by automated systems. Companies operating in Michigan must therefore anticipate heightened scrutiny of data practices that underpin AI models.
At the same time, Michigan policymakers have emphasized the importance of supporting innovation and economic growth. The state’s regulatory posture seeks to balance risk mitigation with the promotion of AI research and commercialization. This balancing act is particularly relevant to Ann Arbor, where startups, research institutions, and established companies coexist within a collaborative ecosystem. Regulatory uncertainty, if not managed carefully, could discourage investment or slow technological progress.
For Ann Arbor businesses, one of the most challenging aspects of AI regulation is navigating the interplay between federal and state requirements. While federal law often preempts state law in certain areas, many aspects of AI governance fall into shared or ambiguous jurisdictional territory. This creates a compliance environment in which companies must reconcile overlapping standards and evolving expectations.
In practice, federal guidance often sets a baseline for responsible AI practices, while state initiatives add context-specific obligations. For example, a company deploying an AI-powered hiring tool may need to comply with federal anti-discrimination laws enforced by the EEOC while also adhering to Michigan-specific labor or privacy requirements. Failure to account for either layer can result in regulatory exposure.
This layered approach also means that companies cannot rely solely on compliance with one regulatory regime as a safe harbor. An AI system deemed acceptable under federal guidelines may still raise concerns under state law if it implicates consumer deception or privacy rights. Conversely, state-level compliance does not insulate a company from federal enforcement if broader statutory obligations are violated.
For businesses operating across state lines, the complexity increases further. Ann Arbor companies that deploy AI products nationally must consider how Michigan’s approach aligns or conflicts with other states’ regulations. Developing adaptable governance frameworks that can accommodate multiple jurisdictions is therefore becoming a strategic necessity rather than a luxury.
Against this regulatory backdrop, effective AI governance has emerged as a critical business function. Ann Arbor companies cannot afford to treat compliance as an afterthought or delegate it solely to legal departments. Instead, responsible AI deployment requires collaboration among legal, technical, and executive leadership.
One of the most important steps is conducting thorough risk assessments before deploying AI systems. These assessments should examine not only technical performance but also potential legal, ethical, and societal impacts. Understanding how an AI model might affect consumers, employees, or partners helps organizations anticipate regulatory concerns and address them proactively.
Transparency is another key component of effective governance. Regulators increasingly expect companies to understand and explain how their AI systems function, particularly when automated decisions have material consequences. While full technical disclosure may not always be feasible or required, organizations should be able to articulate the purpose, limitations, and safeguards associated with their AI tools.
Human oversight remains a central theme in both federal and state guidance. Automated systems should not operate in isolation, particularly in high-stakes contexts such as employment, healthcare, or finance. Maintaining mechanisms for human review and intervention can mitigate risk and demonstrate a commitment to responsible use.
Data governance also plays a foundational role in AI compliance. Since AI models rely heavily on data, ensuring that data collection, storage, and processing practices comply with applicable laws is essential. This includes obtaining appropriate consent, securing sensitive information, and avoiding the use of biased or unlawfully obtained datasets.
Ann Arbor’s unique position as a hub of research, technology, and entrepreneurship presents both opportunities and challenges in the context of AI regulation. The presence of major academic institutions fosters cutting-edge research and talent development, while a vibrant startup ecosystem accelerates commercialization. At the same time, the close relationship between research and deployment raises questions about accountability and oversight.
Companies emerging from academic environments may be accustomed to experimental freedom, but commercial deployment introduces regulatory obligations that cannot be ignored. Transitioning from research prototypes to market-ready products requires a shift in mindset, particularly around compliance and risk management. Understanding regulatory expectations early in the development process can prevent costly redesigns or enforcement actions later.
Collaboration within the local ecosystem can also support compliance efforts. Industry groups, legal professionals, and academic experts can share best practices and insights, helping companies navigate uncertainty. Engaging with policymakers and regulators through public comment processes or advisory initiatives allows Ann Arbor businesses to contribute to the shaping of AI regulation rather than merely reacting to it.
AI regulation is not static, and Ann Arbor companies must prepare for continued evolution. Federal lawmakers have introduced multiple proposals aimed at establishing comprehensive AI governance frameworks, though consensus remains elusive. As public awareness of AI risks grows, pressure for clearer and more enforceable rules is likely to increase.
At the state level, Michigan may expand its regulatory efforts as AI adoption deepens. Future legislation could address algorithmic accountability, automated decision-making transparency, or sector-specific risks. Companies that have already invested in robust governance structures will be better positioned to adapt to new requirements.
Proactive engagement remains the most effective strategy for navigating this uncertainty. Monitoring regulatory developments, investing in compliance infrastructure, and fostering a culture of responsible innovation can transform regulation from a constraint into a competitive advantage. Customers, partners, and investors increasingly value ethical and lawful AI practices, making compliance a component of brand reputation and trust.
Artificial intelligence presents transformative opportunities for Ann Arbor companies, but it also introduces complex legal and regulatory challenges. The interplay between federal initiatives and Michigan-specific measures creates a multifaceted compliance environment that demands careful attention. By understanding the principles underlying AI regulation and integrating them into business strategy, organizations can navigate this landscape with confidence.
Rather than viewing regulation as an obstacle, Ann Arbor businesses have the opportunity to lead by example. Responsible AI deployment not only reduces legal risk but also enhances trust, sustainability, and long-term value. As AI continues to reshape industries, those who align innovation with regulatory and ethical standards will be best positioned to thrive.
Contact Tishkoff
Tishkoff PLC specializes in business law and litigation. For inquiries, contact us at www.tish.law/contact/, and explore Tishkoff PLC’s website (www.Tish.Law/), eBooks (www.Tish.Law/e-books), blog (www.Tish.Law/blog), and resources (www.Tish.Law/resources).
Sources
- Executive Office of the President, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence https://bidenwhitehouse.archives.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
- Federal Trade Commission, Business Guidance on Artificial Intelligence and Algorithms https://www.ftc.gov/ai
- Equal Employment Opportunity Commission, Technical Assistance on the Use of Artificial Intelligence in Employment Decisions https://www.eeoc.gov/newsroom/us-eeoc-and-us-department-justice-warn-against-disability-discrimination
- Michigan Legislature, Public Acts and Proposed Legislation Addressing Artificial Intelligence and Deepfake Technology https://legislature.mi.gov/documents/2025-2026/billanalysis/Senate/pdf/2025-SFA-4047-F.pdf
- National Institute of Standards and Technology, AI Risk Management Framework https://www.nist.gov/itl/ai-risk-management-framework
