Sunday, January 24, 2016

Where should existential risk policy entrepreneurs intervene?

The golden path awaits... hopefully
Source: Mac Rebisz

As I've previously mentioned, ensuring humanity's long-run survival is the most important moral and public policy goal that exists. To that end, figuring out how ordinary individuals can best contribute is an important question that deserves our attention. Unfortunately, there are no easy answers.

The first consideration is issue specificity: should interested people focus on "existential risk" broadly, or identify the single risk or topic most amenable to influence? Ultimately, the only way to ensure long-run survival is to sustainably colonize space. But that is a complex, far-off, and vague goal, closely aligned with widespread notions of prosperity, technological advancement, and overall progress.

The strongest argument for focusing on space colonization is that it's inspirational. It's also a little weird, and maybe heartless given Earth's current problems, like poverty and lack of freedom. Regardless, since truly getting off-planet will require a massively more advanced economy, the issue gets mixed up with contentious public debates about the role of technology in society and the government's interaction with free markets. I suspect that a narrower focus on instrumental goals with defined outcomes has a higher social return on investment.

Among intermediate goals there is wide variation, each with unique advantages and disadvantages. The simplest and most well-understood existential risk is probably an asteroid impact, and indeed much has been accomplished on this front through both public and private efforts. An important question is whether the useful qualities associated with asteroid risk--sexiness, simplicity, an existing channel for public action--make it worth capitalizing on with further marginal action. It's entirely possible that asteroids' relative success means concerned individuals seeking to maximize their impact should find other issues to work on. AI risk, technologies necessary for living in space, specific public funding goals, and growth in industries that will support overall progress are all potential intervention points.

80,000 Hours and the effective altruism movement have studied these questions of individual strategy quite a bit, and have converged on a few rough insights (though specific answers would require specific analysis). First, the idea of earning to give should not be underestimated. Instead of trying to build a career around existential risk--for example, trying to get a job at CSER--it may be best to maximize one's income to enable large financial contributions to relevant organizations. This might be especially true for those lacking the technical skills most emphasized by institutions working on existential risk (i.e., math and science).

A second career strategy generally considered effective is direct involvement in politics. Since effective altruism and existential risk are relatively new concepts, anyone who holds these views and gets elected to public office stands a pretty good chance of being an improvement over the alternative. In the aggregate, if enough like-minded folks run for and win various offices and start climbing the ranks, eventually some will be in a position to have a real impact.

For people who want to make a difference on existential risk but lack the zeal, intellectual bandwidth, or socioeconomic flexibility to reorient their entire lives or careers, contributing money to private organizations is a good choice. For those who want to go a little further, however, I see local organizing and advocacy as a potential option.

There's a useful model in political science--John Kingdon's "multiple streams" framework--that describes public policy change as essentially the convergence of three processes, or "streams." First, there needs to be a widely appreciated, understood, and salient problem. Next, a policy solution should be coherent and basically shovel-ready. And lastly, the political climate and incentives must allow action. When the problem, policy, and politics streams all align, a window for policy change opens.

Through this lens, some existential risk issues seem more amenable to amateur help via advocacy than others. AI risk, for example, is a widely appreciated problem with no shovel-ready solution. For many environmental issues, the problems are well understood and solutions are ready, but the politics remain gridlocked. For most other existential risks, however, there's not nearly enough media visibility or political lobbying.

Local organizing and advocacy--social meetups, educational events, demonstrations, etc.--might be a pretty easy way to improve on the status quo. There seems to be a decent amount of money and public interest in doing more to protect ourselves from existential risks, but the social capital simply isn't there yet. Federal lawmakers, for the most part, don't know, don't care, and aren't willing to propose or vote for relevant funding increases. To make progress on existential risk mitigation, I think broadening the stakeholder base beyond its current group of math/science nerds and techno-libertarians is a necessary step toward our ultimate salvation.