March 14, 2018

A Skeleton Argument for a New Statecraft

We are operating on the notion that we might be able to produce a scientific political discipline of vastly higher quality than the current mixture of pre-scientific knowledge and mystical belief. This does not automatically imply that it would be a good thing to do so. Even so, we are making that assumption.

So we ought to justify or at least investigate the reasons that producing a new statecraft will do good things. We’ll start with a skeleton argument of the right general form:

  1. We can produce a new statecraft science of vastly higher capability and precision. (Believed, in need of justification).

  2. If the good effects of an action outweigh its bad effects plus the opportunity cost, and it is possible to do, then we should do that action. (Roughly tautological, but of imprecise form, and philosophically contentious).

  3. The opportunity cost of building a new statecraft is large but not overwhelming: on the order of years of effort by a team of high-talent men who would otherwise be doing other big things. (Believed, reasonable given #1, but in need of justification).

  4. The good effects are extremely large, on the scale of major positive civilizational transformation. (Believed, in need of justification).

  5. The bad effects are a significant fraction of the good effects, but are covered by them. This claim is not just that the bad effects are smaller, but that they are dependent on the scale of the good effects, such that if the good effects fail to materialize, the bad effects are limited. (Believed, in need of justification).

  6. The good effects outweigh the bad effects plus opportunity costs. (Follows from 3, 4, and 5).

  7. We should build the new statecraft (Follows from 1, 2, and 6).

So we have a valid and believed argument with a lot of unjustified premises. More importantly, we know what we need to know to know that building the new statecraft is a good idea. (Isn’t that a mouthful).
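The validity claim can be checked mechanically. Here is a brute-force propositional model check in Python; the encoding is our own simplification, treating premises 3–5 as atomic claims and the inference in step 6 as a bridging implication:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

valid = True
for can, p3, p4, p5, net_pos, should in product([False, True], repeat=6):
    premises = [
        can,                                 # 1: a new statecraft is possible
        implies(net_pos and can, should),    # 2: the decision rule
        p3, p4, p5,                          # 3-5: cost/benefit claims
        implies(p3 and p4 and p5, net_pos),  # 6: good outweighs bad + cost
    ]
    # A countermodel would satisfy all premises while falsifying 7.
    if all(premises) and not should:
        valid = False
print(valid)  # True: no countermodel exists, so the argument is valid
```

Validity is the easy part; as noted above, the real work is justifying the premises themselves.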

Providing full justifications here isn’t a good use of space, but we will have to do so eventually. We can however clarify the questions to be answered, and allude to our reasons for believing each:

Possibility of Superior Scientific New Statecraft

We have begun discussing this in other posts. It has a few parts.

The core of it is whether or not it’s possible to have hard-scientific knowledge in sociology and statecraft, or whether it’s either inherently squishy, or doomed to never develop scientifically.

A number of objections to the possibility of scientific statecraft come to mind. These objections, so far as we have examined them, fall apart on closer inspection, and even yield useful avenues of research and constraints. We will have to continue formally investigating and dispatching these objections.

The upshot of all this is that we believe that science and the scientific method are a real and powerful way to approach knowledge, and that there are no fundamental barriers to scientific statecraft, only incidental and historical barriers, and a lack of properly applied effort thus far.

The real proof will be in the pudding, or in this case, a well-justified plan to produce a scientific new statecraft, and promising initial results.

Part of the claim is not just that a scientific statecraft is possible, but that such a thing would be far superior to what we have and would otherwise get. We believe the success of science in engineering is highly suggestive of promise here. The majority of this part of the argument will come from our discussion of the positive impact of scientific new statecraft.

Better Theory of Action

Our theory of action above is agreeable if not taken too precisely, and some version of it is shared by most, but in general we would like to use high-assurance methods for especially big critical components like the justification of the entire project. This means a more precision-happy theory of action so we can really dot our 'i's and cross our 't's.

There are a few points on which to improve our theory of action:

  1. Dealing with decision making under uncertainty. Our above theory of action had no mention of probability or uncertainty or risk or any of that. Explicitly handling these things would be good.

  2. We need to address plans and strategy. An action isn’t just “good” or “bad” in a vacuum. It’s good for some purpose. It fits into some plan to achieve some end. It doesn’t obstruct other elements of some plan.

  3. We need to address a predictive theory of action in a psychological sense, i.e. "if the action is known to meet these criteria, then the person in question will do it". After all, we are not just making plans in a vacuum; we want ourselves to actually do it. This seems like a pedantic point, but it might not be.

  4. We need to flesh out surrounding economic and praxeological concepts like opportunity cost and so on. These are the core concepts of good planning and decision making.
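To illustrate the kind of precision these points ask for, here is a minimal expected-value sketch of the decision rule in premise 2, extended with probability (point 1) and opportunity cost (point 4). All numbers are invented for illustration, not estimates we actually endorse:

```python
# Hypothetical numbers for illustration only.
p_success = 0.2          # assumed chance the new statecraft materializes
good_if_success = 100.0  # value of the good effects if it does
bad_if_success = 20.0    # bad effects scale with the good (premise 5)
bad_if_failure = 2.0     # limited downside when the good fails to appear
opportunity_cost = 10.0  # value of the team's next-best project

expected_value = (
    p_success * (good_if_success - bad_if_success)
    + (1 - p_success) * (-bad_if_failure)
    - opportunity_cost
)
print(expected_value > 0)  # the act-if-positive version of premise 2
```

Note that premise 5's structure shows up as the small bad_if_failure term: if the downside scaled with effort rather than with success, the calculation would look much less favorable.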

It may also be that a theory of action cannot be produced which justifies a particular decision in a transferable way. There may be too much judgement and implicit knowledge involved. Still, we can produce some arguments, document our reasoning, and examine it.

Cost of Building a New Statecraft

We think the seed package of the new statecraft can be achieved by a relatively small team of talented theorists. The installation parts will require more, but are also possible.

This we are less prepared to argue at the current time.

Again the proof will be in the plan. How will this be achieved, and what will it cost?

Addendum: we now have a written version of the surrounding plan which gives context to this. Still we don’t have a plan for developing the statecraft itself.

Large Positive Effect

We have a few rough arguments for why a better statecraft, and an engineering-grade science of social technology, will produce a large positive effect.

Argument by analogy to engineering science, noting the major social problems which we claim are caused by failures of social technology:

  1. The development of engineering science yielded vastly better technology. (Believed, undisputed as far as we know.)

  2. The vastly better technology of the industrial revolution produced a large positive effect on society. (Believed with caveats, generally accepted by most, with some notable dissent).

  3. Social technology is like material technology in relevant aspects. For example, it can be engineered better by better science, it enables us to achieve better and more reliable effects at lower cost, and it can be used for good. (Believed by us, not generally accepted, in need of justification).

  4. We have major known problems which are best explained as problems of inadequate social technology, which would be solved by vastly better social technology. For example, all the fighting and dysfunction around politics, most social problems, etc. (Believed by us, not too contentious, but in need of justification).

  5. Vastly better social technology would likewise have a very large positive effect on society, possibly even correcting for some of the social-failure-mediated downsides of material technology. (Believed. Supported by 1-4).

Fractional Negative Effects

Why do we believe that the negative effects will be small?

The downside would come from a few things, which we can address individually:

  1. Empowerment of terrorists, rebels, and conflicts, like the downside of nukes or biological weapons. Our general belief is that most of these actors are trying to accomplish some specific good, and are just confused or using bad means. Better means will mean more constructive ways to solve these problems. Further, states and the forces of order will be more empowered by scientific social technology than will terrorists and rebels, so we would expect fewer people to attempt such moves.

  2. Empowerment of big bad actors, like bad states. Social technology of the kind we aim for is explicitly about empowering states to be less bad. That an empowered state is a good thing is a major set of claims we need to justify. One could imagine an evil-dictator type of scenario, where organizing power is much easier than being philosophically wise and benevolent. But it is our belief that most abuse of power is because of conflict, inadequate techniques, and inadequate understanding, which better social technology would tend to solve.

  3. Empowerment of America’s geopolitical rivals, like China. In our opinion, their technology of political order, social engineering, and political conflict is superior to ours, so if anything, we need to be catching up so as not to get outmaneuvered. Since this is written in English and aimed at America, we expect the benefits to accrue to American power more than foreign power.

  4. Negative downside externalities of narrow highly effective social technology. For example, maybe there is some highly effective way to achieve political order at the expense of a society’s ability to see the truth or benefit its own people. We believe these strategies exist all over the place, and are being heavily used, and account for much of our trouble. However we believe that these strategies are themselves symptoms of bad social technology of political and economic organization. These are the problems we wish to solve.

  5. Weird accidental existential risk scenarios, like the possible downside of artificial intelligence. We’re not sure what this would even look like. We look forward to hearing from critics about how we’re going to destroy the world.

In short, we are optimistic, but this optimism depends on the particulars of what we think better social technology will end up looking like. We need to lay out a better model of what the new social technology will look like and what precise effects it will have before being able to argue this more convincingly.


So we need:

  • More arguments and discussion around the nature, possibility, difficulty, and effects of scientific statecraft, and what that would mean.

  • An actual fully developed plan of action (addendum: we’ve written a skeleton version) from which we can estimate possibility and costs.

  • Predictive accounting for the effects of the plan, from which we can estimate positive and negative impact.

Edited and curated by Wolf Tivy

Comments? Email