Forget the flashy demos and chatbot hype. While the world’s been distracted by dancing AI avatars and existential dread, a quieter revolution is brewing. Eric Malley, in a recent piece, shines a light on Anthropic and Cohere, two companies arguably doing the real heavy lifting in the AI space. Are they the unsung heroes, or just really good at keeping secrets? Let’s dive in.
The Branding Battlefield vs. The Lab
Malley’s core argument? There’s a vast chasm between ‘AI on the street’ – think slick interfaces and viral tweets – and ‘AI in the lab,’ where the actual, impactful innovation happens. OpenAI and Google grab headlines, sure, but Anthropic and Cohere are allegedly building the infrastructure beneath the surface. It’s the difference between a flashy sports car and the engine that powers it. One gets the attention; the other gets the job done.
Anthropic: Safety First, Ask Questions Later
Anthropic, creators of Claude 2, are reportedly obsessed with AI safety. Their mission, according to Malley, is to build AI that is “helpful, honest, and harmless.” Noble goals, but let’s be honest, “harmless” AI sounds about as exciting as decaf coffee. But the money is there. With billions in funding from Amazon and Google, they’re certainly taking the ethical high ground… or at least, they can afford to. Whether that translates to genuinely safer AI or just really expensive hand-wringing remains to be seen. It’s easy to preach ethics when you’re swimming in venture capital.
Cohere: AI for the Enterprise Overlord
While Anthropic focuses on safety, Cohere is apparently laser-focused on enterprise applications. Forget poetry-writing chatbots; Cohere wants to revolutionize defense, healthcare, and manufacturing. Their approach is allegedly cloud-agnostic and multilingual, meaning they’re building AI that can be deployed anywhere and understand everyone. It’s the AI equivalent of a Swiss Army knife, if Swiss Army knives were incredibly complex and potentially world-altering. Collaborations with McKinsey and Palantir suggest they’re serious about getting AI into the hands of the people (or rather, corporations) who can actually use it.
Spherical Philosophy™: Because Why Not?
Malley introduces his ‘Spherical Philosophy™’, a framework that conveniently aligns with the strengths of his chosen AI companies. It rests on interdependence, adaptability, and ethical responsibility. Basically: be nice, work together, and don’t destroy the world.
- Interdependence Drives Innovation: As seen in Anthropic and Cohere’s collaborations with tech and consulting giants.
- Adaptability Meets Practicality: Understanding the split between lab-based breakthroughs and consumer-facing tools is key to leveraging AI’s full potential.
- Ethical Guardrails:
  - Transparency – AI must be interpretable by humans.
  - Accountability – Developers must own responsibility for unintended consequences.
  - Equity – Benefits of AI should be distributed fairly across society.
The 2030 Prediction (aka Wishful Thinking?)
Malley boldly predicts that by 2030, Anthropic and Cohere will have redefined how the U.S. interacts with AI. Think enhanced industry applications, regulatory leadership, and economic growth. It’s a rosy picture, to say the least. Will AI solve all our problems by 2030? Probably not. Will it make some industries more efficient (and potentially eliminate some jobs)? Almost certainly. The real question is whether we’ll be ready for the changes.
The Verdict: Hype or Substance?
Are Anthropic and Cohere the hidden forces shaping the future of AI? It’s tough to say definitively. Malley’s article paints a compelling picture of two companies focused on substance over style. But, like any good magician, AI companies are masters of illusion. The true test will be whether their technology lives up to the hype, and whether we, as a society, can navigate the ethical and societal implications of their work. One thing is certain: the AI revolution is far from over, and the real battles are being fought behind the scenes, not on Twitter.