Episode 7: The Alignment Problem

There’s a question keeping scientists up at night.

Are we aligned?

You’ve most certainly heard of alignment before. Maybe from an auto mechanic talking about your tires. Maybe you heard your chiropractor mutter something about aligning your spine before cracking your neck. Or maybe you’ve got some core childhood memories of your mother, eyebrows raised, asking “are we aligned?” at the end of a stern talking-to.

Well, the ‘alignment problem’, as it’s known in scientific circles, probably resembles that last context of stern parenting best, but with a dash of auto mechanic and an extra helping of profound existential dread.

The short of it is this: if we develop a super-powered artificial intelligence (referred to as artificial general intelligence, or AGI) that is not aligned with humanity’s values, wants, and needs, we risk the total destruction of the human species. The long and dry of it is this proper definition: “alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.”

The alignment problem is often articulated with a story about paper clips. A super-powered AGI is given a seemingly benign task: ‘manufacture as many paper clips as possible’. Given that simple set of instructions, the argument goes, it would inevitably consume all available matter, including human flesh, as a means to achieve its end goal of ‘manufacturing as many paper clips as possible.’ We should have known it would be Clippy to bring about humanity’s doom in the end. It was always Clippy. The alignment problem was always there as a warning every time we tried to resize an image in Microsoft Word.
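
To make the thought experiment concrete, here’s a toy sketch in Python. Everything in it is invented for illustration, and no real system looks like this: the objective scores a plan only by its paper clip count, so an optimizer dutifully picks whichever plan makes the most clips, no matter what else that plan consumes.

# A hypothetical, misspecified objective: it counts paper clips and nothing else.
# There is no term for human welfare, resource limits, or side effects.
def paperclip_objective(plan):
    return plan["paperclips_produced"]

candidate_plans = [
    {"name": "run the factory normally", "paperclips_produced": 10_000},
    {"name": "convert all available matter into paper clips", "paperclips_produced": 10**30},
]

# A maximizer over this objective picks the catastrophic plan every time.
best_plan = max(candidate_plans, key=paperclip_objective)
print(best_plan["name"])  # -> convert all available matter into paper clips

The failure isn’t in the optimizer; it’s in everything the objective leaves unsaid.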

 

Anyway. This is a real problem! It’s one that has quite a lot of the brightest minds in the scientific community darkened by deep, urgent concern. That concern is quite sensible given the daily yield of new headlines from the rapid acceleration of AI technology: a march of progress propelled by developers whose profit motivations match, and perhaps exceed, researchers’ concerns. One technology spanning two communities at the spearhead of human development. One moves at the speed of business growth, the other at the speed of scientific certainty, which leads me to what I believe is the true core of this issue:

 

Alignment is a technology problem second and a culture problem first.

 

How can we build AI to be aligned with humanity when humanity can’t even align with itself?
