This feels like a timely overview, given the recent revelation that xAI's Grok has been producing large amounts of illegal content, sometimes involving minors.
The team at Future of Life Institute recently conducted a safety review of some of the most popular AI tools on the market, including Meta AI, OpenAI's ChatGPT, and Grok.
The review considered six key factors:
- Risk assessment – efforts to ensure that tools cannot be manipulated or used to cause harm
- Current harms – including data security risks and digital watermarking
- Safety framework – the processes each platform has in place to identify and address risks
- Existential safety – whether the project is monitored for unexpected evolutions in its programming
- Governance – the company's lobbying efforts on AI governance and AI safety regulations
- Information sharing – transparency of the system and insight into how it works
Based on these six factors, the report assigns each AI project an overall safety score, reflecting a broader assessment of how well each company manages the risks of its development.
The team at Visual Capitalist has translated these results into the infographic below, which provides additional food for thought on where AI development is heading (especially with the White House seeking to remove potential obstacles to that development).

