AI washing is slapping “AI-powered” on a product that isn’t doing anything meaningfully intelligent. Maybe it’s a handful of if-then rules rebranded as “machine learning.” Maybe it’s a basic API call to GPT with no fine-tuning, no custom data, and no differentiation from what anyone could build in an afternoon. Maybe it’s just a keyword search algorithm from 2015 with a fresh coat of marketing paint.
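To make the last case concrete, here is what a "keyword search algorithm from 2015" often looks like under the hood — a purely illustrative sketch with hypothetical names, not any specific vendor's code:

```python
# Illustrative only: plain keyword matching that could be (mis)marketed
# as "AI-powered semantic search". No model, no learning, anywhere.

def ai_powered_search(query: str, documents: list[str]) -> list[str]:
    """Despite the name, this ranks documents by raw keyword overlap."""
    terms = set(query.lower().split())
    scored = []
    for doc in documents:
        words = set(doc.lower().split())
        overlap = len(terms & words)  # count of shared keywords
        if overlap:
            scored.append((overlap, doc))
    # Sort by overlap count, highest first -- no embeddings, no inference.
    return [doc for _, doc in sorted(scored, reverse=True)]
```

If a vendor's "AI" reduces to something like this, turning it "off" changes almost nothing — which is exactly the test described below.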
The SEC has already started cracking down on it. In 2024, they fined multiple companies for misleading investors about their AI capabilities. But for every company that gets caught, dozens more are quietly inflating their AI credentials to justify higher valuations, win enterprise contracts, or attract talent.
Why It Exists
Money. AI companies command higher valuations than non-AI companies. Enterprise buyers have AI budgets that need spending. Job candidates want to work on AI. The incentive to claim AI capability — whether or not it exists — is enormous.
The problem is amplified because most buyers can’t evaluate AI claims. When a vendor says their product uses “proprietary AI algorithms,” how do you verify that? If you’re not technical, you can’t. And even technical people struggle to evaluate claims in domains outside their expertise.
Who Should Care
Executives making buying decisions: If you’re evaluating AI vendors, you need a framework for separating real capability from marketing. I’ve reviewed vendor pitches for clients at companies like KeyBank and Sabre, and the gap between claims and reality is frequently enormous.
Investors: If AI capability is a core part of a company’s valuation thesis, verify it. Ask for architecture documentation. Talk to their engineers, not just their sales team. Look at their actual model performance metrics, not their marketing benchmarks.
Founders and product leaders: If you’re tempted to AI-wash your own product, don’t. The short-term benefit of an inflated AI claim is not worth the long-term damage when customers realize the product doesn’t deliver. And with the SEC paying attention, there’s now legal risk too.
Who Shouldn’t Worry
If you’re building products and being honest about your capabilities — including using third-party AI APIs, which is completely legitimate — you’re fine. Using OpenAI’s API in your product doesn’t make you an AI company, but it does make your product AI-enhanced. There’s nothing wrong with that, as long as you describe it accurately.
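For context, an honest "AI-enhanced" feature built on a third-party model is often just a thin request like this — a hypothetical sketch of the payload a product might send to a hosted chat-completions endpoint (model name and prompt are placeholders):

```python
# Hypothetical sketch: what a thin "AI-enhanced" feature typically sends
# to a third-party model API. No proprietary model, no training data of
# our own -- and that's fine, as long as it's described honestly.
import json

def build_summary_request(user_text: str) -> str:
    """Assemble a chat-completion request for a vendor-hosted model."""
    payload = {
        "model": "gpt-4o-mini",  # a vendor's model, not ours
        "messages": [
            {"role": "system",
             "content": "Summarize the user's text in one sentence."},
            {"role": "user", "content": user_text},
        ],
    }
    return json.dumps(payload)
```

Nothing here is deceptive on its own; the deception starts when this is pitched as "our proprietary AI."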
How to Spot AI Washing
Here’s the framework I use when evaluating AI vendors for clients:
- Ask what happens without the AI. If you turned off the “AI” component, what would the product do? If the answer is “basically the same thing,” it’s AI washing.
- Ask about their training data. Real AI products are built on specific datasets. If a vendor can’t tell you what data their model was trained on, they probably don’t have a model.
- Ask for a failure case. Real AI systems fail in specific, predictable ways. If a vendor claims 99% accuracy and can’t describe when their system gets it wrong, they’re either lying about the accuracy or lying about the AI.
- Ask about their ML team. How many people work on the model? What’s their background? If the answer is “our developers added some AI features,” that tells you the depth of investment.
- Request a technical demo with your data. Marketing demos use cherry-picked examples. Bring your own data and see what happens.
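The five checks above can be folded into a simple due-diligence sketch — purely illustrative, with hypothetical field names and no standard behind it — that keeps vendor conversations honest:

```python
# Hypothetical rubric encoding the five checks above. Field names
# are illustrative, not an industry standard.
from dataclasses import dataclass

@dataclass
class VendorClaims:
    works_without_ai: bool        # product is "basically the same" with AI off
    can_name_training_data: bool  # vendor can describe the model's training data
    can_describe_failures: bool   # vendor explains when the system gets it wrong
    has_dedicated_ml_team: bool   # people own the model, not "devs added AI"
    demoed_on_our_data: bool      # demo ran on the buyer's data, not cherry-picked

def ai_washing_red_flags(v: VendorClaims) -> list[str]:
    """Return the checks a vendor failed; an empty list is a good sign."""
    flags = []
    if v.works_without_ai:
        flags.append("Product behaves the same with the 'AI' turned off")
    if not v.can_name_training_data:
        flags.append("Cannot identify training data")
    if not v.can_describe_failures:
        flags.append("Claims accuracy but cannot describe failure cases")
    if not v.has_dedicated_ml_team:
        flags.append("No dedicated ML investment")
    if not v.demoed_on_our_data:
        flags.append("Only demoed on cherry-picked examples")
    return flags
```

A vendor does not need a perfect score, but a vendor who trips most of these checks is selling marketing, not a model.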
What to Actually Do About It
If you’re evaluating AI tools, bring a technical advisor into the process. Not to gatekeep, but to translate. A fractional CTO or senior engineer who understands AI architecture can ask the handful of pointed questions that separate real capability from marketing in about 15 minutes.
The Verdict
AI washing is rampant, the SEC is watching, and your best defense is asking questions that marketing decks can’t answer.
Related: AI Strategy for Non-Technical CEOs | Build vs. Buy AI Tools
