defensive tech: Nonlinear Function
Created: November 27, 2023
Modified: December 01, 2023


This page is from my personal notes, and has not been specifically reviewed for public consumption. It might be incomplete, wrong, outdated, or stupid. Caveat lector.

Vitalik Buterin argues for defensive accelerationism (d/acc):

One frame to think about the macro consequences of technology is to look at the balance of defense vs offense. Some technologies make it easier to attack others, in the broad sense of the term: do things that go against their interests, that they feel the need to react to. Others make it easier to defend, and even defend without reliance on large centralized actors.

A defense-favoring world is a better world, for many reasons. First of course is the direct benefit of safety: fewer people die, less economic value gets destroyed, less time is wasted on conflict. What is less appreciated though is that a defense-favoring world makes it easier for healthier, more open and more freedom-respecting forms of governance to thrive.

He goes on to break down candidate technologies into a few categories:

  • physical defense on the macro level (conventional 'defense', resilient infrastructure, etc.)
  • physical defense on the micro level (biodefense, etc.)
  • online defense where it is straightforward to agree on who the attacker is (cyber defense)
  • online defense where it's not straightforward to agree on the attacker (information defense)

Perhaps one could add more dimensions to this and consider defense in interpersonal relations (e.g., mindsets that cultivate emotional resilience), in economic relations (a UBI is in a sense 'defensive tech' for someone's finances), ???

Unfortunately, basic research has a tendency to be dual-use. A better understanding of human biology can be used for vaccines and medical treatments, but also for bioweapons. Offensive and defensive technologies really only separate once you're working on specific applications. I tend to think that basic research is still generally good, but it's certainly possible to do what seems like basic research and have it lead to bad outcomes in the near term (e.g., for a while, computer vision research was mostly being used to surveil and oppress the Uyghurs).