
AI Could Ruin Your Life or Business — Unless You Take These Critical Steps

December 18, 2024

(This column originally appeared in Entrepreneur)

Key Takeaways

  • Your concern — at least for now — isn’t about violating any of the few AI laws that exist. Your concern is that AI technology in your company is misused by your employees — willingly or not — and creates potential liabilities that could challenge your business.

AI regulations are starting to grow — but not fast enough.

As far back as 2019, the Trump Administration issued an executive order to maintain the U.S.’s lead in AI technology by pledging that the government would promote and enhance AI resources. In 2022, the Biden Administration issued its “Blueprint for an AI Bill of Rights” to encourage organizations to develop safe, effective, private and non-discriminatory systems. A year later, an Executive Order further reinforced the standards set out in the original 2022 blueprint.

This year, the Department of Labor issued guidance to help employers use AI technology in non-biased employment decisions in order to avoid non-compliance with equal employment opportunity laws. Also this year, the National Institute of Standards and Technology issued an “AI Risk Management Framework” to help organizations build “trustworthy” and “responsible” AI systems.

Some states, like Utah, Tennessee and Massachusetts, passed formal legislation this past year that created task forces, protected artists and required local businesses to reduce the risk of AI misuse by imposing penalties for creating “deep fakes” (images or misrepresentations of actual people) and for failing to disclose the use of AI in customer and employee interactions. Other states, like California, failed to pass similar bills.

Related: Balancing AI Innovation with Ethical Oversight

What does this mean for most businesses? Not much yet. The federal government’s announcements are toothless rules and guidelines that are near-impossible to enforce. And the laws passed by the states mentioned do little to rein in the misuse of AI.

If you’re looking for advice, here it is: don’t wait for governments to catch up. You need to take steps to rein in the use of AI in your company. Your concern — at least for now — isn’t about violating any of the few AI laws that exist. Your concern is that AI technology in your company is misused by your employees — willingly or not — and creates potential liabilities that could challenge your business.

For example, what if a worker, independently and with the best of intentions, decides to try an AI app that promises to automate their tasks but instead skips over important steps in your quality process? Or uses data incorrectly by sending out unauthorized emails or indiscriminately approving a sales or purchase order? Worse yet, what if an untested AI application deletes your data or exposes it to the wrong people?

This not only creates operational problems but also exposes your business to potential lawsuits from customers, suppliers or partners if their data has been misused.

This is why so many companies are creating AI policies. You should, too. According to a recent survey of more than 330 C-suite executives, approximately 44% said they have policies governing the use of generative AI, an increase from just 10% the year before. I’m betting next year’s responses will be even higher.

AI policies, like AI, can be complex. IT and legal experts I’ve spoken to recommend stating the “ethical principles” the company supports, such as fairness, transparency and privacy, and establishing internal roles and responsibilities for governing the use of AI. Others say that a policy should include how a company gathers data and what steps it takes to ensure that the data is secure and private. Many recommend documenting how generative AI products are evaluated, tested and communicated internally.

Related: How AI Tools Helped My Company Stand Out in a Crowded Market

Fair enough. But really, a good AI policy just answers these five simple questions.

  • What core AI application features are approved for use in the company? I stress singling out “features” because most accounting, CRM, HR and other applications are already building AI capabilities in their products. So, it’s not a matter of what products will be used because AI isn’t really a product. It’s a feature or function of a product. For example, Microsoft Copilot is an added function to Windows and Office. QuickBooks’ AI function is called Intuit Assist. Salesforce has Einstein.
  • What non-core AI applications are allowed? There are countless apps and tools that leverage generative AI to create art and images, do research, review contracts and wordsmith correspondence. They’re easy to find and tempting to use. But none of these should be used unless they’ve been vetted.
  • What departments or teams are allowed to use AI? Is this limited to your marketing group? Your finance team? Your IT group? Ultimately, AI will be used throughout your organization. But for now, your policy should address only those teams that are allowed to use it.
  • What functions are allowed for AI? Within those groups, how are they allowed to use AI? E-mail campaigns? Policy drafts? Basic research?
  • Who is in charge of AI at your company? Is it your director of IT? Your VP of operations? Someone — or some team — has to take ownership of what and how AI is used internally, and all uses must be approved by that person.
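Answered, those five questions amount to a simple checklist that your designated AI owner can enforce. Here is a minimal sketch in Python; every feature, department, function and job title below is a hypothetical example drawn from the article, not a recommended policy:

```python
# A sketch of the five policy questions as a checkable data structure.
# All values are illustrative examples, not recommendations.
AI_POLICY = {
    # Q1: approved core AI features (built into products you already use)
    "approved_core_features": {"Microsoft Copilot", "Intuit Assist", "Einstein"},
    # Q2: vetted standalone AI apps (none approved yet in this example)
    "approved_non_core_apps": set(),
    # Q3: departments allowed to use AI
    "allowed_departments": {"marketing", "finance"},
    # Q4: functions AI may be used for within those departments
    "allowed_functions": {"email campaigns", "policy drafts", "basic research"},
    # Q5: who owns and approves all AI use internally
    "owner": "Director of IT",
}

def is_use_approved(policy, feature, department, function):
    """Return True only if the feature, department and function are all approved."""
    feature_ok = (feature in policy["approved_core_features"]
                  or feature in policy["approved_non_core_apps"])
    return (feature_ok
            and department in policy["allowed_departments"]
            and function in policy["allowed_functions"])
```

With this example policy, `is_use_approved(AI_POLICY, "Einstein", "marketing", "email campaigns")` passes, while the same request from an unlisted department does not.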

To be sure, even having a good AI policy won’t guarantee that an employee won’t do something that causes damage. But if that employee did something that was against your policy, then you have protection in case things get legal.

So, where to get started? Yeah, you guessed it: AI. Go to ChatGPT, Claude, Copilot, Gemini, Grok or any other AI chatbot and prompt it to create an AI policy. Try this prompt, which I “borrowed” from ChatGPT:

“Can you create an AI policy for my business? We are a [industry/type of business] that uses AI for [specific applications or purposes]. Our priorities are [e.g., ethics, compliance, data privacy, transparency, etc.]. Please include [specific elements you’d like, e.g., risk management, accountability, communication guidelines, etc.]. Make it suitable for [audience, e.g., internal teams, external stakeholders, customers].”
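If you reuse that template for different businesses or revisions, a tiny helper keeps the substitutions consistent. This is just a string-formatting sketch; the filled-in values are hypothetical examples, not recommendations:

```python
def build_policy_prompt(industry, applications, priorities, elements, audience):
    """Fill the prompt template's bracketed placeholders with your details."""
    return (
        f"Can you create an AI policy for my business? "
        f"We are a {industry} that uses AI for {applications}. "
        f"Our priorities are {priorities}. "
        f"Please include {elements}. "
        f"Make it suitable for {audience}."
    )

# Example values (hypothetical) for a fictional company:
prompt = build_policy_prompt(
    industry="retail e-commerce company",
    applications="customer-service chat and marketing copy",
    priorities="data privacy, transparency and compliance",
    elements="risk management and accountability guidelines",
    audience="internal teams",
)
```

Paste the resulting `prompt` into whichever chatbot you prefer, then refine from there.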

Of course, never trust the initial response. Instead, dig deep, get your advisors, experts and counsel to review the policy and then communicate it to your employees. Every three to four months, upload that same policy to your friendly neighborhood chatbot and ask it to suggest and incorporate updates based on anything that’s happened since your last revision.

Don’t wait for the government to protect you from the problems that can result from the misuse of AI. Protect yourself.
