Tom Zander
Jan 12, 2017

Bitcoin: the long view.

Bitcoin is an amazing idea and has been running with great success for 8 years. It is far from perfect, though, and many issues will take time to rectify in a good and responsible manner. I want to share my longer-term engineering goals for the Bitcoin full node, which direct my priorities; you can already see some of them in the Bitcoin Classic full node, which released its 1.2 version last week.

1 Protocol documentation

Bitcoin as a whole is something we often call a protocol. But unlike most protocols, there is precious little documentation out there that describes in detail what actually goes over the wire, or why, for instance, a transaction stores numeric values in 3 different ways.

The first goal is to move towards a Bitcoin that is fully documented. The point here is that the documentation of the protocol is to be 'leading'. So if two implementations disagree, the protocol documentation is the one that is right. This avoids fluffy arguments where having the biggest market share, or having been around longest, somehow makes one implementation more right.

2 Backwards compatibility of protocol

Parts of the current Bitcoin protocol were designed without keeping industry best practices in mind. For many parts this is not a big deal, but some of those best practices exist for a very good reason. A good example is that practically all of the data structures in the Bitcoin protocol are unchangeable. It is impossible to add a value to a p2p message, and you can't remove an unused variable that is stored in each and every transaction.

The second goal is to move towards tagged protocol data-structures. The idea of tagged data goes back decades, well before Bitcoin was created. The point here is that we know mistakes have been made and will continue to be made by humans extending and fixing Bitcoin. For this reason we need the ability to cleanly make backwards compatible changes. Adding a new field in an existing p2p message is much cleaner than having to create an entire new message-type with all the same info, plus one item.
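To make the idea of tagged data concrete, here is a minimal sketch in Python. The tag numbers, field layout and helper names are invented for illustration; this is not the actual Bitcoin Classic wire format. The key property is that a parser can skip tags it does not recognize, so adding a field later does not break old clients:

```python
import struct

# Hypothetical tag-based encoding: each field is a (tag, length, payload)
# triple, so parsers can skip tags they do not recognize.

def encode_field(tag: int, payload: bytes) -> bytes:
    return bytes([tag, len(payload)]) + payload  # 1-byte tag, 1-byte length

def parse_message(data: bytes, known_tags: set) -> dict:
    fields, pos = {}, 0
    while pos < len(data):
        tag, length = data[pos], data[pos + 1]
        if tag in known_tags:                 # old clients skip unknown tags
            fields[tag] = data[pos + 2 : pos + 2 + length]
        pos += 2 + length
    return fields

TAG_VERSION, TAG_NONCE = 1, 2                 # hypothetical tag numbers
msg = encode_field(TAG_VERSION, struct.pack('<I', 70001)) \
    + encode_field(TAG_NONCE, b'\x2a')        # a new field an old client ignores

old_client = parse_message(msg, known_tags={TAG_VERSION})
assert TAG_NONCE not in old_client            # unknown field skipped, parse still succeeds
```

Contrast this with the current fixed layout, where every byte's meaning depends on its position and nothing can be added or removed.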

Note: the basic concepts of Bitcoin are sane and sound; we should not change those!

3 Access to blockchain as a database

Bitcoin as an industry depends on the blockchain as the universal database that everyone shares and uses. The main property of a database is that it can provide fast access to the information you seek. A normal database would be able to return all transactions since a certain date, as a quick example.

Unfortunately the blockchain in any full node has data-access methods that are quite primitive and very slow, making the blockchain essentially private data. This means that block explorers end up having to re-create a full database. Research into usage patterns and many other properties is limited to the few who have plenty of patience.

The third goal is to move towards a full node that provides full access to Bitcoin, including its database. The simple access of the raw data is something that can be made substantially faster and it opens the node to being much more useful for a large range of features.

This blog originated from the Classic long-term roadmap, which has a second section going into more detail on each of those goals.

Join the Bitcoin Classic community

Do these goals align with what you are looking for in Bitcoin? Please consider joining us. Running the client, sending an email when you find a typo or simply sharing your story on reddit are great ways to start. Read the Classic community page for many more ways to join this exciting movement.

Bitcoin is Made by People
Tom Zander
Jan 11, 2017

Bitcoin Classic is about 1 year old, so let’s take a quick look back.

When Bitcoin Classic was founded, our goal was to work with the established Bitcoin development teams and grow Bitcoin on-chain. The obvious and only factor we were focused on was the limit on the size of blocks. Bitcoin Classic compromised - deeply. We went for the most conservative possible increase in block size. From 1MB to 2MB.

The response we triggered, within days of the release, was shocking. Several miners, developers and the Blockstream company president agreed that Bitcoin Classic was not allowed to compete with the bitcoin establishment, and they made their proclamations publicly and openly.

In the following months the implications of this action began to sink in. Those who want Bitcoin to grow to allow more users realised compromise with the establishment was a mistake. After a year of working together, compromises and peace offerings, the establishment didn't just reject all of them, it actually rejected the idea of decentralisation and open discussion.

In the latest version of Bitcoin Classic we no longer compromise. The user is at the forefront, and we have listened to the many voices which today find the system to be slow and expensive.

The solution, included in Bitcoin Classic 1.2, takes the power of setting the maximum block size away from the “establishment”, those who have shown they cannot be trusted to represent the bitcoin community.

They have shown they are tolerant of neither open discussion nor educated disagreement, and these established developers have abused their first-mover position to centralise decision making power in Bitcoin, the first decentralized P2P currency.

Power in Bitcoin is decentralized by design.

Centralized decision making is successful when people trust and support the software organization, without the need for questioning the established group of developers. However, the true power lies with all of the individual participants in the network. Only for as long as individuals keep running centralized software, using centralized services and tolerating centralized solutions will any established group have power over Bitcoin.

The solution for the block size debate is to remove the ability of software developers to set a maximum block size limit. Instead, give the power of this limit to the people. The limit can now be set by the full node operator, the exchanges, the online wallets and the home users.

Because with Bitcoin we can route around centralised decision making. Bitcoin Classic joins Bitcoin Unlimited to give people the tools to do so. People running one of these Bitcoin clients no longer have a block size limit.

But make no mistake - this is not a fight between software groups.

To be clear: because Bitcoin is decentralised in all aspects, we all decide what Bitcoin is, by running code and participating in the system. If we want to change something, we choose different code, and when the collective joins you in your choice, Bitcoin changes for the better.

Every participant makes their own choice, following their own best interest within the collective system. There is nothing in the code or system that prevents this. In fact, it is deeply embedded in Bitcoin as a dynamic system that evolves and grows without any centralized decision making.

You have to stand up for your opinion because nobody else can.

People like you can make their voices heard by doing something as simple as running Bitcoin Classic. You can also write to companies that are undecided to explain your position, or simply link to this text.

Bitcoin Classic works for the people, and if people act to maintain Bitcoin as a decentralized project, the best will happen.

Blocksize Consensus

As we near the closing of 2016 (year of the Monkey) there is a growing awareness of how we can solve the question of block size limits for many years to come. Please see here for an overview of this.

The idea is that we can move from a centrally dictated limit to one where the various market participants can again set the size in an open manner. The benefit is clear, we remove any chance of non-market participants having control over something as influential as the block size.

A good way to look at this is that the only ones that can now decide on the size are the ones that have invested significant value, believing in an outcome that is best for Bitcoin as a whole.

The credit goes to Andrew Stone for coming up with this concept of building consensus. He also published another, intertwined, idea which is the Acceptable Depth (AD) concept.

If the open market for block sizes still makes people uncomfortable, the AD concept causes real pain. The concept has had many detractors, and the way complaints and attack scenarios have been handled is by adding more complexity to it. The introduction of the so-called "sticky gate", created with no peer feedback, is a good example: there is now an official request from stakeholders asking for it to be removed again. The author's response to that request was not to remove it, but to suggest yet more complexity.

I really like the initial idea where the market decides on the block size. This is implemented in Bitcoin Classic and I fully support it.

The additional idea of "Acceptable Depth (AD)" has too many problems, so I set out to find a different solution. This part of Andrew's solution is not a consensus parameter: any client can have its own solution without harming compatibility between clients.

What is the problem?

In a Bitcoin reality where any full node can determine the block size limits, we expect a slowly growing block size based on how many people are creating fee-paying transactions.

A user running a full node may have left his block size limit a little below the size that miners currently create. If such a larger-than-limit block comes in, it will be rejected. Any blocks extending that chain will also be rejected, which means the node will not recover until the user adjusts the size-limit configuration.

It would be preferable to have a slightly more flexible view of consensus where we still punish blocks that are outside one of our limits, but not to the extent that we reject reality if the miners disagree with us. Or, in other words, we honour that the miners decide the block size.

In the original "AD" suggestion this was done by simply requiring 5 blocks on a chain when the first is over our limits. After that number of blocks has been seen to extend the too-big one, we have to accept that the rest of the world disagrees with our limits.
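As a sketch (assuming "5 blocks" means five blocks built on top of the violating one; the published rule may count slightly differently), the naive AD acceptance test is nothing more than a fixed depth check:

```python
def ad_accepts(blocks_on_top: int, acceptance_depth: int = 5) -> bool:
    # Naive Acceptable-Depth rule: accept an over-limit block once enough
    # blocks extend it, regardless of how far over the limit it was.
    # A block 1 byte over and a block megabytes over are treated identically.
    return blocks_on_top >= acceptance_depth

assert not ad_accepts(4)   # still rejected
assert ad_accepts(5)       # the rest of the world has spoken
```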

This causes a number of issues;

  • It ignores the actual size of the violating block. A block is treated the same whether it is 1 byte too large or several megabytes too large. This opens up attacks from malicious miners.
  • There is no history kept of violating blocks. We look at individual violating blocks in isolation, making bypassing the rules easier.

Solving it simpler

Here I propose a much simpler solution, which I have been researching for a month. To better understand the effects I wrote a simulation application to explore many different scenarios.

I introduce two rules;

  1. Any block violating local limits needs at least one block on top before it can be added to the chain.

  2. Calculate a punishment-score based on how much this block violates our limits. The more a block goes over our limit, the higher the punishment.

How does this work?

In Bitcoin any block already has a score (GetBlockProof()). This score is based on the amount of work that went into the creation of the block. We add all the block scores together to get a chain-score.

This is how Bitcoin currently decides which chain is the main chain: the one with the highest chain-score.

Should a block come in that is over our block size limit, we calculate a punishment. The punishment is a multiplier against the proof of work of that block. A block that is 10% over the limit gets a punishment score of 1.5. The effect of adding this block to a chain is that it adds the block's PoW, then subtracts 1.5 times that again from the chain-score. The result is that the addition of this block removes 50% of one block's proof-of-work value from that chain.

The node will thus take as the 'main' chain-tip the block from before the oversized one was added. But in this example, if a miner appends 1 correct block on top of the oversized one, that will again be the main chain.

Should a block come in that is much further over the size limit, the number of blocks miners have to build on top of it before the large block is accepted grows accordingly. A block 50% over the limit takes an additional 6 blocks.
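The arithmetic above can be sketched in a few lines (using the factor of 10 and the 0.5 offset described later in this post; the function names are mine, not from the Classic codebase):

```python
def punishment(block_size: int, limit: int, factor: float = 10.0) -> float:
    """Punishment multiplier: zero below the limit, then
    factor * overdraft + 0.5, where overdraft is the fraction
    by which the block exceeds the limit."""
    if block_size <= limit:
        return 0.0
    overdraft = (block_size - limit) / limit
    return factor * overdraft + 0.5

def chain_score_delta(block_pow: float, block_size: int, limit: int) -> float:
    """Adding a block contributes its proof-of-work minus the
    punishment times that same proof-of-work."""
    return block_pow * (1.0 - punishment(block_size, limit))

LIMIT = 2_000_000                             # e.g. a 2 MB local limit
assert punishment(2_200_000, LIMIT) == 1.5    # 10% over -> punishment 1.5
assert punishment(3_000_000, LIMIT) == 5.5    # 50% over -> punishment 5.5
# A 10%-over block *removes* half a block's PoW from the chain-score,
# so one correct block on top makes the chain the best tip again.
assert chain_score_delta(1.0, 2_200_000, LIMIT) == -0.5
```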

The effect of this idea can be explained simply;

First, we make the system more analogue instead of black/white. A node that has to choose between two blocks that are both oversized will choose the smaller one.

Second, we make the system much more responsive: if a block at height N violates our rules only a tiny little bit, then as soon as some other miner adds a block at N+1 we can continue mining at block N+2. An excessive block, on the other hand, takes several more blocks on top of it to get accepted by us. So it is still safe.

Last, we remove any attack vectors where the attacker creates good blocks to mask the bad ones. This is no longer possible because the negative score that a too big block creates will stay with this chain.

Visualising it

This proposal is about a node recovering when consensus has not been reached. So I feel OK about introducing some magic numbers and a curve that, in practice, shows good results in various use-cases. This can always be changed in a later update because this is a node-local decision.

On the horizontal axis we have the block size. If the size is below our limit, there is no punishment. As soon as it gets over the block limit, there is a jump and we institute a punishment of 50%. This means that it only adds 50% of the proof-of-work score of the new block.

If the block size goes further over the limit, this punishment grows very fast, taking 6 blocks to accept a block that is 150% of our configured block size limit.

The punishment is based on a percentage of the block size limit itself, which ensures this scales up nicely when Bitcoin grows its acceptable block size. For example, a 2.2MB block on a 2MB limit is a 10% overdraft. We add a factor and an offset, making the formula simply factor × overdraft + 0.5.

The default factor shown in the graph is 10, but this is something the user can override should the network need it.


Using a simple-to-calculate punishment, and the knowledge that Bitcoin already calculates a score for the chain based on the proof-of-work of each block, we can create a small and simple way to find out how much the rest of the world disagrees with our local world-view, in analogue, real-world measurements.

Nodes can treat the size limits as a soft limit which can be overruled by mining hash power adding more proof of work to a chain. The larger the difference in block size and limit, the more proof of work it takes.

Miners will try to mine correct chains, inside the limits, fixing the problem over time, but they won't waste too much hash power pushing out one slightly oversized block in a chain.

This is not meant as a permanent solution for nodes, and full node operators are not excused from updating their limits. On the other hand a node that is not maintained regularly will still be following Bitcoin, but it will trail behind the main chain by a block.


Currently block size is the only relevant variable. If we add more variables that are taken out of the consensus rules and made market-set, we can decide on, or make configurable, a proportional system. So, for instance, if we add sigop counts, those violations may count for only 30% of the punishment whereas block size overdraft accounts for 70%. The actual punishment is still based on the actual overdraft, in an analogue, (super-)linear fashion.

Classic is Back

Bitcoin Classic was started as a response to the market need for bigger blocks, and for an alternative developer group to provide them, as the primary and first way to get Bitcoin to scale to the next million users. Since that first release in February we have come a long way. The Bitcoin landscape has changed, and mostly for the better. This is an accomplishment a lot of us can be proud of.

Our two main goals were to have more distributed development and to have bigger blocks. We are a lot closer to both ideals, but there is more work to be done.

I personally started this journey some 3 years ago. I tried the code that was Bitcoin Core and wanted to improve upon some things that annoyed me. Now, 3 years later, I have seen every file and most of the code of what is in Bitcoin. It is a very rewarding problem to study and can occupy one for years.

My primary role in Bitcoin Classic is one of release manager. This essentially means I take the responsibility to get the highest quality releases. Next to that I love to innovate and write code. I've been doing a lot of that over the last months.

So, why title this blog "Classic is back"? The truth is, it never really went away. I think it's fair to say that Classic stepped aside when the 2MB band-aid solution we shipped caused a lot of market response and discussion. But Classic has always been there. Busy, coding, improving things.

One of the changes we have seen in the market is a change in understanding of how all Bitcoin software can work together. This idea has been used many times before, so it has already stood the test of time. It is about how we can have a limit without setting a limit in the software.
All software, when it is immature and only targets a small number of users, has limits built in. These limits are really there because the author learned that his software is not good enough to do more, and a limit is by far the easiest 'solution'.

This design isn't uncommon; the older ones among us may remember that MS-DOS had a memory limit of 640KB. Emails 20 years ago had a maximum size of 1MB.

As that software grew up, better solutions were found that made those limits obsolete. The limits were removed and people were happy.

Removing the limits in software doesn't make things unlimited. It just means other costs limit the actual size. You still can't send a DVD-sized email. The maximum size is now determined by the market: a basic calculation of cost and benefit.

Let's take a look at some market incentives that a Bitcoin miner has;

  • Bigger blocks are nice because the miner gets to pick up more fees.
  • On the other hand, smaller blocks are nice to keep a bit of scarcity and have people still paying fees.
  • Also, bigger blocks take more time to send and make the chance of orphans higher, both of which are costly.
  • And on the other hand, smaller blocks cause a bigger backlog and, as a natural result, bad service to paying customers. We want to avoid losing them to the competition.

These are conflicting incentives. There are reasons for bigger blocks and reasons for smaller blocks. The end result will be that blocks have a size set based on the market demand for space in those blocks, simply because that is the most profitable thing for miners to do. The good thing here is that the best situation for miners is a pretty good situation for everyone else in Bitcoin as well.

What Andrew Stone came up with is that everyone publishes the maximum size they are willing to accept and this makes the market of block sizes become an open market. Now miners can decide what the ideal block size is, without fearing that their blocks will be rejected by other miners for being too large.

A purely market-driven block size is a big change for Bitcoin, and it may take a bit more time before everyone gets used to it. In the meantime, Classic is working on several next-generation projects.

Network manager / Admin server

In Classic I started a project called the Network Manager, a replacement network layer which solves many of the problems we have with the p2p code inherited from the Satoshi client. On top of this an admin server is currently being built. The admin server fills the role of remote control: you can create transactions, query the block chain, sign data and ask it to make coffee. All but the last are already present; the admin server just reuses the existing functionality from the 'Bitcoin-cli' app.

Where it shines is speed: it is several orders of magnitude faster, and it doesn't just answer questions like a webserver does. The admin server holds a live connection and pushes out data, for instance when a new block comes in.

This enables a new set of network-enabled tools. Some examples of where it is going to be used:

  • Application management software which monitors any number of nodes under its control for warnings and slowness, but also for operational statistics, like how many megabytes have been sent.
  • Scientific research, to query the entire blockchain without having to store it locally.
  • Much faster uploads of big blocks from a miner to the full node.

This project is nearing 80% completion. Is Classic back on track?

Flexible Transactions (BIP134)

In the original Bitcoin 0.1 release Satoshi Nakamoto included a design that is a very good indication of his intentions for the future. He included a version field for transactions. Today, 8 years after the fact, we are still using the exact same format that was introduced as version 1 all those years ago. We are not really using this version field that Satoshi provided.

There are plenty of problems in Bitcoin's design which would benefit immensely from actually being able to make fixes in the transaction format. Flexible Transactions is a protocol upgrade that is meant to make that happen.

The Flexible Transaction proposal has been under development for a couple of months now and we currently solve quite a lot of issues in this first version.

Fixes Bitcoin issues;

  • Malleability
  • Linear scaling of signature checking
  • Hardware wallet support (proofs)
  • Very flexible future extensibility
  • Double spend proofs
  • Makes transactions smaller
  • Supports the Lightning Network
  • Support future Scripting version increase

Where are we now?

Flexible Transactions is running on a testnet. The functionality is 80% done. We have some more tests to run.

Bitcoin Classic is definitely back.

Developer friendly

Bitcoin Classic is based on the 8-year-old Satoshi codebase. I myself have been spending more time than expected learning this codebase, and I know from other developers that I'm definitely not alone. The codebase is not even that large! The most often heard complaint is that it lacks a modular design.

Hard to understand code leads to lower quality products. This is very simple to understand: programmers are just human and they make mistakes when they have to create clean code in a dirty environment. I think it makes a lot of sense to have a focus on improving the architecture and the code quality of Bitcoin Classic. The goal is to help developers understand the code faster and at the same time help our products to be of higher quality.

Because there is such a lack of modular design, the best way to start fixing this is to introduce one. Basics like an "Application" class as a starting point for application-wide resources have been coded, and more is coming.

The various products I mentioned today are also all based on common and reusable technologies. The introduction of a common message-format will be the common layer between the Flexible Transactions and the Network Manager and the Admin Server, as a quick example.

I feel it's very important to work on all these improvements together with the other teams that are part of Bitcoin. They have their own improvements, and in the end we want to reuse each other's innovations and, ultimately, code. I have had many good conversations with people like Pedro Pinheiro, Dagur Valberg Johansson (dagurval), Amaury Sechet (deadalNix), Andrea Suisani (sickpig), Freetrader and Justus Ranvier, who are great people to work with and who cover the majority of the Bitcoin implementations. There is a lot of overlap in what we do and how we think.

The important part is the differences. Each Bitcoin Client is unique in its own way and this is useful for everyone. This helps to avoid the echo-chamber problem and we avoid killing ideas that are before their time. More people competing on a friendly basis will benefit everyone and that is what it means to have many implementations of Bitcoin.

Bitcoin Classic is back. Bitcoin Classic is working on some pretty exciting things and we have all come a long way.

Flexible Transactions

Update: this post was made when FlexTrans was started. It is now mostly finished and has its own home where more info can be found.

I've been asked one question quite regularly and recently with more force. The question is about Segregated Witness and specifically what a hard fork based version would look like.

Segregated Witness (or SegWit for short) is complex. It tries to solve quite a lot of completely different and unrelated issues, and it tries to do this in a backwards-compatible manner. Not a small feat!

So, what exactly does SegWit try to solve? We can find info of that in the benefits document.

  • Malleability Fixes
  • Linear scaling of sighash operations
  • Signing of input values
  • Increased security for multisig via pay-to-script-hash (P2SH)
  • Script versioning
  • Reducing UTXO growth
  • Compact fraud proofs

As mentioned above, SegWit tries to solve these problems in a backwards-compatible way. This requirement is there only because the authors of SegWit set it for themselves. They set it because they wished to use a softfork to roll out this protocol upgrade. This post is going to attempt to answer the question of whether that is indeed the best way of solving these problems.

If you are the impatient type; skip to the Conclusions

Starting with malleability: the problem is that a transaction, between being created by the owner of the funds and being mined in a block, can be changed in such a way that it is still valid but its transaction identifier (TX-id) has changed. But before we dive into the deep, let's take a general look at the transaction data first.

If we look at a Transaction as it is today, we notice some issues.

Version                       4 bytes
Number of inputs              VarInt (between 1 and 9 bytes)
inputs:
    Prev transaction hash     32 bytes, stored in reverse
    Prev transaction index    4 bytes
    TX-in script length       Compact-int
    TX-in script              This is the witness data
    Sequence-no/CSV           4 bytes
Number of outputs             VarInt (between 1 and 9 bytes)
outputs:
    Value                     Var int
    TX-out script length      Compact-int
    TX-out script             bytearray
NLockTime                     4 bytes

The original transaction format as designed by Satoshi Nakamoto opens with a version field. This design approach is common in the industry and the way that this is used is that a new version is defined whenever any field in the data structure needs changing. In Bitcoin we have not yet done this and we are still at version 1.

What Bitcoin has done instead is make small, semi-backwards-compatible changes. For instance the CHECKSEQUENCEVERIFY feature repurposes the sequence field as a way to add data that would not break old clients. Incidentally, this specific change (described in BIP68) is not backwards compatible in the main clients, as it depends on a transaction version number greater than 1, while they all check for standard transactions and say that only version 1 is standard.

The design of having a version number implies that the designer wanted to use hard forks for changes. Any new datastructure requires software that parses it to be updated. So it requires a new client to know how to parse a newly designed data structure. The idea is to change the version number and so older clients would know they should not try to parse this new transaction version. To keep operating, everyone would have to upgrade to a client that supports this new transaction version. This design was known and accepted when Satoshi introduced Bitcoin, and we can conclude his goal was to use hard forks to upgrade Bitcoin. Otherwise there would not be any version numbers.

Let's look at why we would want to change the version. Some items in the table above are confusing. Most notably, numbers are stored in 3 different, incompatible formats in transactions. Not really great, and certainly a source of bugs.
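As an illustration of one of those formats, here is a small decoder sketch for the variable-length "compact" size used for counts and script lengths; the fixed-width fields, like the 4-byte version, are just little-endian integers:

```python
import struct

def read_compact_size(data: bytes, pos: int = 0):
    """Decode Bitcoin's CompactSize: one of four encodings chosen by the
    first byte. Returns (value, new_position)."""
    first = data[pos]
    if first < 0xfd:
        return first, pos + 1                                   # 1 byte
    if first == 0xfd:
        return struct.unpack_from('<H', data, pos + 1)[0], pos + 3  # 2-byte LE
    if first == 0xfe:
        return struct.unpack_from('<I', data, pos + 1)[0], pos + 5  # 4-byte LE
    return struct.unpack_from('<Q', data, pos + 1)[0], pos + 9      # 8-byte LE

# The same kind of number, carried two different ways:
assert read_compact_size(b'\x10')[0] == 16              # CompactSize, 1 byte
assert read_compact_size(b'\xfd\xe8\x03')[0] == 1000    # CompactSize, 3 bytes
assert struct.unpack('<I', b'\x01\x00\x00\x00')[0] == 1  # fixed 4-byte version field
```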

Transactions are cryptographically signed by the owner of the coins so others can validate that he is actually allowed to move them. The signature is stored in the TX-in script.
Crypto-geeks may have noticed something weird that goes against textbook knowledge. Textbooks dictate that a digital signature has to be placed outside of the thing it signs, because a digital signature protects against changes. If you inserted the signature somewhere in the middle after generating it, you would be modifying the transaction, and that means you'd have to regenerate the signature because the transaction changed.

Bitcoin's creator did something smart with how transactions are actually signed, so the signature doesn't have to be outside the transaction. It works. Mostly. But we want it to work flawlessly, because this being too smart currently causes the dreaded malleability issues, where people have been known to lose money.

What about SegWit?

SegWit actually solves only one of these items: it moves the signature out of the transaction. SegWit doesn't fix any of the other problems in Bitcoin transactions, and it also doesn't change the version, despite changing the transaction's meaning, leaving old clients essentially unable to understand it.

Old clients will stop being able to check the SegWit type of transaction, because the authors of SegWit made it so that SegWit transactions just have a sticker of "All-Ok" on the car while moving the real data to the trailer, knowing that the old clients will ignore the trailer.

SegWit wants to keep the data structure of the transaction unchanged while at the same time trying to fix that data structure. This causes friction, as you can't do both at once, so a non-ideal situation results and hacks are to be expected.

The problem, then, is that SegWit introduces more technical debt, a term software developers use to say the system design isn't done and needs significantly more work. And the term 'debt' is accurate, as over time everyone that uses transactions will have to understand the defects to work with them properly. Which is quite similar to paying interest.

Using a soft fork means old clients will stop being able to validate transactions, or even parse them fully. Yet these old clients remain convinced they are doing full validation.

Can we improve on that?

I want to suggest a way to change the data structure of the transaction one time, so that it becomes much more future-proof and the issues it gained over time are fixed as well, including the malleability issue. It turns out that this new data structure makes all the other issues that SegWit fixes quite trivial to fix.

I'm going to propose an upgrade I call;

Flexible Transactions

Last weekend I wrote a little app (sources here) that reads a transaction and then writes it out in a new format I've designed for Bitcoin. It's based on ideas I've used for some time in other projects as well, but this is the first open source version.

The basic idea is to change the transaction to be much more like modern formats such as JSON, HTML and XML. It's a 'tag'-based format, which has various advantages over the closed binary-blob format.
For instance, if you add a new field, much like tags in HTML, an old browser will just ignore that field, making it backwards compatible and friendly to future upgrades.

Further advantages;

  • Solving the malleability problem becomes trivial.
  • We solve the quadratic hashing issue.
  • Tag-based systems allow you to skip writing unused or default values.
  • Since we are changing things anyway, we can default to var-int encoded data instead of having 3 different integer encodings in transactions.
  • Adding a new tag later (for instance ScriptVersion) is easy and doesn't require further changes to the transaction data structure. All old clients can still make sense of all the known data.
  • The actual transaction turns out to be about 3% shorter on average (calculated over 200K transactions).
  • Where SegWit adds a huge amount of technical debt, my Flexible Transactions proposal instead pays off a good chunk of technical debt.
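
To illustrate the var-int point above, here is one common base-128 variable-length encoding, shown only as a sketch; the exact scheme is not pinned down in this post:

```python
def write_varint(n):
    """Encode an unsigned int as base-128 digits with a continuation bit
    (one possible varint scheme; an assumption for illustration)."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)   # more digits follow
        else:
            out.append(byte)
            return bytes(out)

# A typical output value of 50000 satoshis takes 8 bytes as a fixed
# 64-bit integer, but only 3 bytes as a varint:
assert len(write_varint(50000)) == 3
```

Small values dominate real transactions, which is where the roughly 3% average size reduction measured above comes from.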

An average Flexible Transaction will look like this:

Section   Tag                            Format         Hashed as
TxStart   (Version) 0x04                                TX-ID data
inputs    TX-ID I try to spend           1 + 32 bytes   TX-ID data
          Index in prev TX-ID            varint         TX-ID data
outputs   TX-out Value (in Satoshis)     varint         TX-ID data
          TX-out script                  bytearray      TX-ID data
inputs    TX-in-script (Witness data)    bytearray      WID-data

Notice how the unused tags are skipped. The NLockTime and the Sequence were not used in this example, so they do not take any space.

The Flexible Transaction proposal uses a list of tags, like JSON's "Name": "Value" pairs, which makes the content very flexible and extensible. The difference is that instead of text, Flexible Transactions use a binary format.

The biggest change here is that the TX-in-script (which SegWit calls the witness data) is moved to the end of the transaction. When a wallet generates this new type of transaction it will append the witness data at the end, but the transaction ID is calculated by hashing only the data that comes before the witness data.

The witness data typically contains a public key as well as a signature. In the Flexible Transactions proposal the signature is made by signing exactly the same set of data as is hashed to generate the TX-ID, thereby solving the malleability issue: if someone changed the transaction, it would invalidate the signature.
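
The hashing rule described above can be sketched in a few lines. I assume here, purely for illustration, that a serialized transaction is a byte array with the witness section starting at a known offset, and that Bitcoin's usual double-SHA256 is kept:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's standard hash: SHA256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def tx_id(tx_bytes: bytes, witness_offset: int) -> bytes:
    """The TX-ID covers only the bytes before the witness section,
    so changing a signature can no longer change the transaction ID."""
    return double_sha256(tx_bytes[:witness_offset])

def digest_to_sign(tx_bytes: bytes, witness_offset: int) -> bytes:
    """The signature commits to the very same bytes as the TX-ID."""
    return double_sha256(tx_bytes[:witness_offset])
```

Mutating anything in the witness section leaves `tx_id` unchanged, while mutating anything before it invalidates both the TX-ID and the signature at once.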

I took 187000 recent transactions and checked what this change would do to the size of a transaction with my test app I linked to above.

  • Transactions went from an average size of 1712 bytes to 1660 bytes, and from a median size of 333 bytes to 318 bytes.
  • Transactions can be pruned (removal of signatures) after they have been confirmed. The size then goes down to an average of 450 bytes or a median of 101 bytes.
  • Contrary to SegWit, with this upgrade new transactions get smaller for all clients.
  • Transactions, and blocks, where signatures are removed can expect up to a 75% reduction in size.

Broken OP_CHECKSIG scripting instruction

To actually fix the malleability issue at its source we need to fix this instruction. But we can't change the original until we decide to make a version 2 of the Script language.
This change alone is not a good enough trigger for a version two, and it would be madness to do that at the same time as rolling out a new format for the transaction itself; too many changes at the same time are bound to cause issues.

This means that in order to make the Flexible Transaction proposal actually work we need to take one of the NOP codes unused in Script right now and make it do essentially the same as OP_CHECKSIG, but instead of using the overly complicated manner of deciding what it will sign, we just define it to sign exactly the same area of the transaction that we also use to create the TX-ID (see the rightmost column in the above table).

This new opcode should be relatively easy to code, and it makes it really easy to clean up these scripting issues in a future version of Script.

So, how does this compare to SegWit?

First of all, introducing a new version of a transaction doesn't mean we stop supporting the current version. So all this is perfectly backwards compatible because clients can just continue making old style transactions. This means that nobody will end up stuck.

Using a tagged format for a transaction is a one-time hard fork to upgrade the protocol, allowing many more changes to be made in the future with much lower impact on the system. There are parallels to SegWit; it strives for the same goals, after all. But where SegWit tries to adjust a static memory format by re-purposing existing fields, Flexible Transactions presents a coherent, simple design that removes lots of conflicting concepts.

Most importantly, years after Flexible Transactions has been introduced we can continue to benefit from the tagged system to extend it and fix issues we find later that we haven't thought of today, using the same consistent concepts.

We can fit more transactions in the same (block) space similarly to SegWit, and in both solutions the signatures can be pruned by full nodes without any security implications. What SegWit doesn't do is let unused features take no space. So if a transaction doesn't use NLockTime (which nearly 100% of them don't), it will still take space in SegWit but not in this proposal. Expect your transactions to be smaller and thus carry a lower fee!

On size, SegWit proposes a 60% space gain, which comes from removing the signatures minus the overhead introduced. In my tests Flexible Transactions showed a 75% gain.

SegWit also changes how data is stored in a block: it creates an extra merkle tree which it stores in the coinbase transaction. This requires all mining software to be upgraded. The Flexible Transactions proposal solves essentially the same problem with a cleaner solution, where the extra merkle-tree items are simply added to the existing merkle tree.

At the start of this blog I mentioned the list of advantages the authors of SegWit included. It turns out that those advantages are not related to each other at all, and each has a very separate solution to its individual problem. The tricky part is that, due to the requirement of keeping old clients forwards compatible, SegWit is forced to push them all into one 'fix'.

Let's go over them individually:

Malleability Fixes

Using this new version of a transaction data-structure solves all forms of known malleability.

Linear scaling of sighash operations

If you fix malleability, you always end up fixing this issue too. So there is no difference between the solutions.

Signing of input values

This is included in this proposal.

Increased security for multisig via pay-to-script-hash (P2SH)

The Flexible transactions proposal outlined in this document makes many of the additional changes in SegWit really easy to add at a later time. This change is one of them.

Bottom line: the bigger hash is only included in SegWit because SegWit doesn't solve the transaction versioning problem, and solving that problem is what makes it trivial to do this change separately.
With Flexible Transactions this change can be done at any time in the future with minimal impact.

Script versioning

Notice that this only introduces a version so we could increase it in future. It doesn't actually introduce a new version of Script.

SegWit adds a new byte for versioning. Flexible Transactions has an optional tag for it. Both support it, but there is clearly a difference here.

This is an excellent example of where tagged formats shine brighter than the static memory format SegWit uses, because adding such a versioning tag is much cleaner, much easier and less intrusive to do with Flexible Transactions.

In contrast, SegWit has a byte reserved that you are not allowed to change yet. Imagine having to include "body background=white" in each and every html page because it was not allowed to leave it out. That is what SegWit does.

Reducing UTXO growth

I suggest you read this point for yourself; it's rather technical and I'm sure many will not fully grasp the idea. The bottom line is that they claim the UTXO database will avoid growing because SegWit doesn't allow more customers to be served.

I don't even know how to respond to such a solution. It's throwing out the baby with the bath water.

Database technology has matured immensely over the last 20 years, and the UTXO database is tiny in comparison to what free and open source databases can handle today. Granted, the UTXO database is slightly unfit for a normal SQL database, but telling customers to go elsewhere has never worked out for products in the long term.

Compact fraud proofs

Again, this is not really included in SegWit; only a basis for it is laid. The exact same basis is available with Flexible Transactions, so on this point they are identical.

What do we offer that SegWit doesn't offer?

  • A transaction becomes extensible. Future modifications are cheap.
  • A transaction gets smaller. Using fewer features also takes less space.
  • We use only one integer encoding instead of three.
  • We remove technical debt and simplify implementations. SegWit does the opposite.


SegWit has some good ideas and some needed fixes. Stealing all the good ideas and improving on them can be done, but requires a hard fork. This post shows that doing so gives advantages that are quite significant and certainly worth it.

We introduced a tagged data structure: conceptually like JSON and XML in that it is flexible, but realized as a compact and fast binary format. The Flexible Transaction data format allows many future innovations to be done cleanly, consistently and, at a later stage, in a more backwards compatible manner than SegWit can manage, even given much more time. We have shown that separating the fundamental issues SegWit tries to fix all in one go is possible, and that each fix on its own becomes much lower risk to Bitcoin as a system.

After SegWit has been in the design stage for a year and we still find show-stopping issues delaying its release, I would argue that dropping the requirement of staying backwards compatible should be on the table.

The introduction of the Flexible Transaction upgrade has big benefits because the transaction design becomes extensible: a hard fork is done once to allow soft upgrades in the future.

The Flexible Transaction solution lowers the number of changes required across the entire ecosystem, not just for full nodes. This matters especially because many tools and wallets use shared libraries to create and parse transactions.

The Flexible Transaction upgrade proposal should be considered by anyone that cares about protocol stability, because its risk of failure during or after upgrading is several orders of magnitude lower than SegWit's, and it removes technical debt, which will allow us to innovate better into the future.

Flexible Transactions are smaller, giving significant savings over SegWit after pruning.

Scaling Bitcoin

One of the most talked about topics in 2016 when it comes to Bitcoin is the lack of a good plan for growing and scaling the system into the future.

I find this curious, as anyone that starts to depend on a system should do at least some investigation into what kind of growth it can handle and whether it can actually do what we want it to, this year and 5 or more years into the future.

In this post I want to investigate what kind of scaling we can expect Bitcoin to have and what we need to do to get more scaling out of the system if these expectations prove not to be enough.


The number one reason that Bitcoin has value right now is its promise that Bitcoin will be used and seen as useful by millions of people in the not-so-far future. Any money only has value when enough people assign it value. Any money only has value when it can actually be used. If nobody accepts it for payments, the value will not be realized.

So the number one goal is to allow millions of people to be able to use Bitcoin in their day-to-day lives, where 'use' is defined as making at least one transaction a day. As with any technology, we don't expect this load to be available tomorrow or even this year, as growth happens over time and systems get built over time.

So let's have a goal of 50 million users sending one transaction a day using the Bitcoin network. Not today but 5 years into the future.

A further goal for home users is that the rate at which they can process Bitcoin blocks should be at least five times the speed at which the blocks are created. This means that if a system has no internet for an hour, it would take around 20 minutes at full speed to catch up (processing the backlog plus all new transactions generated while processing the backlog).
Faster is better, but slower than 5 times the creation speed is too slow.
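
The catch-up figure follows from a simple argument. If the node processes blocks at $k$ times the speed at which they are created and has been offline for a time $T$, then during the catch-up time $t$ another $t$ worth of new work arrives, so:

$$ t = \frac{T + t}{k} \implies t = \frac{T}{k - 1} $$

With $T$ = 60 minutes and $k$ = 5 this gives 15 minutes of pure processing, or around 20 minutes once download and other overhead are included.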


Or, what is the current theoretical level of support?

Bitcoin, in the form of various node-software implementations, has had several years to mature. The leadership was not really focused on large growth of this basic layer, as we can see from the uptick in progress that started after Bitcoin Classic and Bitcoin Unlimited began competing with them. As usual, competition is good for the end user, and we see some promise of future gains appearing.

But let's take a look at what kind of transaction load we could support today.

Last week I published a video about the time it takes to download, fully validate and check 7 years, or 420000 blocks, of Bitcoin history (from day one of Bitcoin). This is 75GB of data, which took 6 hours and 50 minutes to fully validate on mid-range hardware. It wasn't cheap hardware, but it was certainly not top-of-the-line or server hardware. In other words, it is a good baseline.

Taking a closer look at what was done in those 6 hours and 50 minutes:

  • From block 295000 to block 420000 (125000 blocks), each and every transaction signature was validated.
  • 75GB was downloaded from Bitcoin peers around the world.
  • A UTXO database of 11 million transactions with 40 million not-yet-spent Bitcoin addresses was built (see the RPC call gettxoutsetinfo).
  • The 125000 blocks contained 104847758 transactions, all of which were validated. That's 105 million transactions.

Note that this was all done with no tweaks to the settings; this was a stock Bitcoin Classic 1.1.0 node (with performance equivalent to Bitcoin Core 0.12.1).

What needs work?

Let's take a look and compare our baseline to our goal. Most people would like software to always get faster and better, but priorities matter and we should look at which parts need attention first.

Our goal was that individual nodes need to be able to process about 50 million transactions in 24 hours.

We noticed that in the baseline check of 6h50m, our node actually downloaded, stored, and checked almost 105 million transactions. This leads to a daily rate of 368 million transactions being downloaded and validated.

$$ TX_{day} = \frac{104847758\ tx}{6 \times 60 + 50\ minutes} \times 24 \times 60\ minutes = 368\ million $$
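
The arithmetic can be checked in a couple of lines:

```python
# Daily throughput implied by the baseline sync run.
transactions = 104847758        # transactions validated in the run
minutes = 6 * 60 + 50           # 6 hours 50 minutes of wall-clock time
per_day = transactions / minutes * 24 * 60
print(round(per_day / 1e6))     # 368 (million transactions per day)
```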

This means that our 5-years-in-the-future goal of 50 million transactions per day is already not an issue for bandwidth and for CPU power today. In fact, our baseline system managed to exceed our goal by a factor of 7.

Our baseline system could handle 7 times our 5 years in the future goal volume today!

Today, an individual Bitcoin node can download, store and validate 368 million transactions a day. That's many times the volume that has ever been sent using Bitcoin.

How do I see the system scale?

A typical home-node

A node that validates all blocks fully is needed in order to keep everyone honest. It also gives me the peace of mind that if I trust my node, I don't have to trust anyone else. So I foresee that you will get small communities gathering around full nodes. You can let your family use it, or maybe your football club or church will set one up just to support the community. Individuals will then make their phone-wallets have at least one host they trust, which is the one from your community.

This preserves Bitcoin's greatest asset: you don't have to trust banks or governments. Trusting their local church or football club is something people find much more normal and common to do.

Such a node would have no need to keep the full history from block zero. It would turn on pruning. With today's price of hardware this does not mean it would stop being able to serve historic blocks, because it could easily hold a month of block history. This does, however, mean we need to make the software a bit smarter. See some pruning ideas.

Hard drive space is cheap today; a 3TB hard drive stores 75 years of Bitcoin history at the current block size. But what if we start getting to our goal? What kind of hard drive do we need then?

The answer to that is that we don't need anything large at all. The idea that we need to have a larger hard drive because blocks are bigger is a misunderstanding. We should work on some pruning ideas in order to scale the system without everyone having to invest in storage space.

The last of the big components for our home node is internet connectivity. To reach our goal we need to be able to download 50 million transactions, at about 300 bytes each, over a 24 hour period.

$$ \frac{50000000 \times 300}{24 \times 60 \times 60} = 173611\ bytes/sec $$ $$ 173611 \times 8 = 1.39\ Mbit/sec $$
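
The same calculation in code form:

```python
tx_per_day = 50_000_000
bytes_per_tx = 300                          # average transaction size assumed above
byte_rate = tx_per_day * bytes_per_tx / (24 * 60 * 60)
print(int(byte_rate))                       # 173611 bytes/sec
print(round(byte_rate * 8 / 1e6, 2))        # 1.39 Mbit/sec
```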

Ideally we go 5 times as fast to make sure the node can 'catch up' if it was offline for some time, and we may want to add some overhead for other parts as well. But the 1.39 Mbit minimum is available practically everywhere today, and 2Gbit, which is 1440 times faster than what we need to reach our plan in 5 years, is available in countries like Japan today.

Home nodes have absolutely nothing to fear from even huge growth of Bitcoin to about 50 million daily users. Their hardware and internet will cover them without pain, especially in about 5 years when more software optimizations will have been created.

Notice that there is no dependency on principles like Moore's law here, no hardware growth is needed to reach our goal.

A typical user

A typical user is expected to use a phone or hardware wallet. The actual requirements for making payments safely are really low, and payments are fast if you use an SPV client.

Current wallets are in need of some work, though. They are typically quite greedy in using the network to update the state of the wallet. While useful, this is not wanted in many situations: when I'm abroad on an expensive data plan, for instance.

There is no need to do any communication with the network before creating a transaction that pays a merchant. Only the actual payment itself needs to be transferred to the Bitcoin network.

A typical user uses a phone. On the topic of scaling there is little to nothing they must do in order to continue working with on-chain scaling.

Usability and related topics in thin clients do need substantial amounts of work though.

A typical mining node

Miners need a full node, where 'full' means that it validates all transactions. In addition to what a home node needs, a miner also needs a fast connection to other miners and a fast way to broadcast his blocks to them.

Just like with a typical home node, the amounts of bandwidth, hard drive space and CPU speed available today are already mostly sufficient for being part of the network.

Additionally, a miner uses their node to create block templates, which means the node takes a section of the pool of unconfirmed transactions and creates a block based on that. This process has seen some optimizations already, but more could be made. For instance, the getblocktemplate RPC call checks the block it just created for validity before it returns it. This check takes quite a lot of time, and a simple solution would be to decouple the returning of the block from the validation, so the miner can start mining optimistically on the assumption that the check passes (it should pass in 100% of cases anyway).

The bigger blocks get, the more data is returned, and the system currently uses JSON, which is almost the worst type of data container for large binary blobs. A simple replacement of the RPC interface with something that just changes the communication format to binary is relatively easy to do (a month of work, probably) and likely needed for miners not to end up waiting too long on interface delays.

In our baseline node we explained that it took 7 hours to fully sync a brand new node from zero. This will stop being the case when we scale up to much bigger sizes; the initial sync will start taking a substantial amount of time. Yet a miner requires a fully synced node. Bitcoin Classic already has one big change there that will push down the validation time substantially. It introduced dynamic checkpoints, which allow the node to skip validation of transaction data by assuming that about a week's worth of blocks will not be built on top of double-spent data. This removes the validation of hundreds of millions of transactions for a starting node.

Another suggestion for future Bitcoin clients meant for miners is that a new node can be pointed at a known and trusted node. The new node would then receive the UTXO database and all other details it needs to be up and running quickly from this trusted node, which means that after downloading only a couple of gigabytes you can have your new node up in 10 minutes.

The most important improvements for mining are various ways to ensure fast download and upload of the actual new blocks.

First there is xthin, a way to make a 1MB block cost only 25KB to send to all miners. This scaling is linear: a 10MB block will likely be around 250KB to send.

Next to that is a technique I call "optimistic mining", which helps miners by splitting the propagation of blocks into two parts. The first is a super fast notification of the new block: just the block header. A miner receiving such a header validates that it has valid proof of work and can then start mining empty blocks on top of it. When the full block has arrived and all transactions are seen, all transactions in the mempool are updated to account for the new block and, last, a new block template is created with as many transactions as fit; only then does the miner start mining it.
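
The two phases above can be sketched as a small state machine. All the names here (`Miner`, `on_header`, `on_full_block`) are hypothetical, chosen just to illustrate the flow, not taken from any real implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Miner:
    """Sketch of optimistic mining: mine an empty block on a bare header,
    then switch to a full template once the block body arrives."""
    mempool: set = field(default_factory=set)
    mining_on: str = "genesis"
    template: list = field(default_factory=list)

    def on_header(self, header_id, pow_valid):
        # Phase 1: header only. If the proof of work checks out,
        # immediately start mining an *empty* block on top of it.
        if pow_valid:
            self.mining_on = header_id
            self.template = []          # no transactions known to be safe yet

    def on_full_block(self, header_id, block_txs):
        # Phase 2: full block arrived. Drop confirmed transactions from
        # the mempool and build a real template with what remains.
        if self.mining_on == header_id:
            self.mempool -= set(block_txs)
            self.template = sorted(self.mempool)

m = Miner(mempool={"tx1", "tx2", "tx3"})
m.on_header("blk9", pow_valid=True)
assert m.template == []                 # mining empty, not idle
m.on_full_block("blk9", ["tx2"])
assert m.template == ["tx1", "tx3"]     # full template once the body is in
```

The point of the sketch is the window between the two calls: the miner is never idle, but also never risks including a transaction that the new block already confirmed.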

A mining node has no need for either a wallet in their node or to have a history of blocks on their node, so they can turn on pruning.

Many of these techniques are already in development or planned to be developed in the next year or so. To reach our 50 million users per day in 5 years, most of these will be more than enough to keep a miner connected to the Bitcoin network without having to invest in a high-end server for the Bitcoin node.


The goal I tried to argue from is 50 million users per day. This goal is a huge increase from today. But to make sure we do it properly, my goal is set 5 years in the future.

Scaling Bitcoin nodes is ultimately boring work with very little effort needed, because it turns out that a modern, simple system can already handle a load about a thousand times higher than the current maximum allowed size permits.

Scaling the entire system takes a little more work, but mostly because miners have not received a lot of new features that they would need in order to make scaling safe for them. Most of those features could be added in a matter of months, with technologies like xthin blocks and optimistic mining already well underway.

The conclusion I have to draw is that the goal of 50M/day is not just reachable; the timeline of 5 years is one we will likely beat quite easily.

Smart tricks like the Lightning Network are not mentioned in this document at all because there is no need for them. Bitcoin can scale on-chain quite easily with almost no risk. Ideas like Lightning are quite high risk because there are so many unknowns.

By far the biggest problem with regards to scaling today is the protocol limit of 1MB block size. This should be removed as soon as possible.

Edit history
  • Fixed a KB -> Mb calculation
  • Fixed some grammar and other tweaks (thanks Philip!)
  • Changed the 'safe multiplier' from 2 to 5.
Innovation - On-chain Scaling

Last weekend we had the On-chain Scaling conference, which was a big success with lots of excellent speakers and a large number of visitors asking questions and showing interest.

I understand that a lot of people didn't manage to watch the talks live for various reasons, not the least of them being time-zone differences.

When my talk went onto YouTube I decided to put up a transcript here. The actual talk can be found on YouTube.


About 10 years ago I worked at Trolltech, a small Norwegian company making tools for developers to build beautiful applications. I worked there for some years when we got acquired by Nokia. Nokia wanted our development libraries to work fast and smooth on their phones as well as on traditional desktops.

As this was a developer-driven company, many developers came up with solutions for how to reach that goal. These developers showing off their ideas meant we had lots of awesome projects to choose from. The projects can roughly be split into two groups: the "optimization" side, which easily fixes a hundred little speed problems, and the "big-changes" side, projects that require much more time because they change some fundamental ideas, but also have a much larger return on investment.

I would argue that Bitcoin is in the same position now. We have to get better and we have to scale Bitcoin for many more expected users coming in over the next years.

My experience with this process in Trolltech was that all the small, low-risk changes to make it faster were a huge success. They were needed, and they made a huge difference.

On the other hand, the larger "Big-Changes" inevitably introduced more risk and they forced people using our software to change their behavior to actually use the new concepts and ideas. For those reasons their effort ended up taking several years to bear fruit.

In the end both groups pulled it off, and the results are in my personal opinion simply unique and amazing.

Some lessons I learned from those projects at Trolltech are:

  • We need researchers to do what they do best: come up with strange and exciting ways to do things better.

  • We also need people doing the boring "optimization" work: making networking protocols work faster and more reliably, as a simple example.

This conference presents various "optimization-level" changes that will make Bitcoin scale much better than it does now, with very little pain and, from experience, quite a big gain. We call this on-chain scaling because, whatever your belief is about which path leads to Bitcoin's success, if we want to scale, we need on-chain transactions to scale beyond the current 3 transactions per second.

For the coming years some changes are being prepared which, by the nature of being fundamental changes to the way Bitcoin will end up being used, will take much longer to create and roll out. I've seen plans like Segregated Witness and the Lightning Network: interesting ideas, for sure. Various teams are working on them and I wish them all the luck. It is already clear that we can't expect those scaling solutions to have any reasonable impact without on-chain scaling solutions being rolled out as well.

This makes sense, of course, when you think about it: you need the foundation to be strong when building on top of it.

Bitcoin, in a nutshell

Now, enough with the high-level talk, let's see some actual ideas.

This is the idea of Bitcoin. We have a single computer that receives transactions and groups them all in its memory pool. At the same time, Bitcoin checks each of those transactions for validity, and after some time those transactions are combined into a block.

Every single transaction entering this Bitcoin computer is checked to be a legal one. For instance, a payment from person A to person B can only be created by person A. We use cryptographic proof to check that the person initiating the transaction is actually the owner of the coins he or she is sending.

Bitcoin also checks that people don't try to send the digital coins to two people at the same time. The Bitcoin application holds all transactions in storage and if one comes in that tries to spend already spent money, it is simply rejected.

When a block comes in, it makes the transactions contained in that block permanent, and Bitcoin will merge that block into its database. The transactions it had in memory are then removed if they were already in the block, or if they are no longer valid because of other transactions in the block.

Bitcoin is essentially fully operational with just one node, but it works by duplicating its functionality over different nodes. Here on screen are 5 nodes.

When a transaction comes in, the node validates it, and if it's valid the node forwards it to all its neighbors, which do the same and forward it to their neighbors. In less than 1 second 50% of the network will have the transaction; in less than 4 seconds practically all of the network will.

These nodes are in many respects exact duplicates, so technically there is little reason to have more than one node, except maybe failover. Politically this duplication means we avoid centralized decision making. We avoid one operator being able to tell the rest of the world what Bitcoin will look like today. This is assuming we make the system work fast enough.

Blocks, the big collections of transactions, get copied between nodes as well. The propagation path is similar to transactions. Currently we send blocks as a complete chunk of data in one go. This is then validated by a node to follow the Bitcoin rules and forwarded to its neighbors.

A healthy network has a large number of Bitcoin nodes to ensure operation and to allow some of them to be removed without harming Bitcoin as a whole. Damage to any part of the network will just get routed around.

Naturally, with a large number of nodes, how they are networked together becomes more important and that networking layer itself becomes a source of failure.

What I've skipped over so far is where the blocks come from. A block is created by a miner. See, here is one in the bottom left corner.

It creates a block and sends it over the network. Each node turning blue means they received the data.

Now, because a block is actually a substantial amount of data, it will take longer than the one second a single transaction took. Measurements report between 10 seconds and a minute to propagate a 1MB block to practically the whole network.

If we start making bigger blocks, this time will go up. More data means slower propagation times.

As the mining difficulty is adjusted every 2 weeks to make sure the time between blocks stays at 10 minutes on average, the time it takes to send an entire block is not a big deal for end users. As long as they can download blocks faster than they get created, things will continue working. Not optimally, but it works.

There are many people using Bitcoin that have higher requirements than that, though, and the current way that Bitcoin works leaves a lot of space for improvement.

How does this work for miners?

I'm currently focusing on miners: how to make sure that miners get the best out of the system. To explain what I'm working on, I will first show what happens with miners using the current Bitcoin software.

I'm going to draw the time as shown from the perspective of two miners. One at the top of the screen, and another miner in another physical location at the bottom of the screen.

Two miners are mining and one finds a nonce that makes his block valid. So this miner is happy; he just made several thousand dollars worth of money. At least, as long as he can convince the rest of the world to use the block he just completed to build following Bitcoin blocks on top of. This implies his block must be seen by the rest of the network before any competitor finishes their own block.

To do so, he sends this block over the network to another miner. The mean propagation time measured by Peter Rizun for blocks between 900KB and 1MB is about 4 seconds, but as soon as you hop over the great firewall of China this rises to a mean of 17.4 seconds. I'll use a round figure of 10 seconds here, mostly because I don't talk that fast anyway.

Now, after the miner on the bottom received the block, he will be able to start mining on top of that.

What you may notice is that the miner on the bottom spends about 10 seconds doing work that is useless. He was still competing for block 1 while that race had already been won by another. He just didn't know it yet.
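To put a number on that wasted window, here is a back-of-the-envelope sketch. The 10-second delay and the 600-second block interval are the round numbers used above, not measurements:

```python
# Rough expected-loss estimate for the stale-work window described above.
# Assumptions (round numbers from the talk, not measured values):
#   - average block interval: 600 seconds
#   - propagation delay before the losing miner switches: 10 seconds

block_interval_s = 600.0
propagation_delay_s = 10.0

# Fraction of the miner's hash power spent racing for an already-won
# block, i.e. revenue lost purely to propagation delay.
wasted_fraction = propagation_delay_s / block_interval_s
print(f"wasted work: {wasted_fraction:.1%}")  # wasted work: 1.7%
```

Under these assumptions roughly 1.7% of a miner's revenue evaporates in propagation delay alone, which is why shaving those seconds matters.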

Let's take a look at my proposal, "Optimistic mining", which tries to fix this specific little issue.

Same situation: one miner finds a block. Here is the difference. I split the block into two parts; one is teeny tiny and sent over the network at close to the speed of light, the other is the actual block, unchanged from before.

While the entire block is being sent, the miner at the bottom is suddenly able to mine on top of this new block, and therefore no longer loses money doing useless work. Because the full block with all the transactions hasn't arrived yet, he can't know which transactions were included in it, and thus which transactions he can safely include in his own block. The solution is for his block to be empty.

The miner switches to mining a full block as soon as he has received the whole block with transactions and can decide which transactions to include in his own.

The 10 seconds highlighted here is the propagation time of a full block. This time can be influenced by many things. Bigger blocks can make it take longer. Network attacks may make it necessary to ask the original miner for more details, slowing down the propagation. And most importantly, as we currently see with the Great Firewall of China, some countries may just not have a great connection to the rest of the world. I don't think it's a good idea that a miner in Australia loses money just because he has a slower connection to the rest of the world.

Optimistic mining means a miner is made immune to any propagation delays of the transactions in a block. This can only mean good things for avoiding centralization of mining power.

In my design the teeny-tiny part of the block a miner sends out as soon as he finds a block will be just the block header, making it approximately 100 bytes. A node can validate this with a cheap check and then forward it.

Two details are important here: the data is forwarded by every node to its peers and that's it. There is no more network traffic that a peer generates based on this. It is just a way to broadcast a new piece of data as fast and as far as possible.
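To illustrate how cheap that check can be, here is a sketch of validating a bare 80-byte header before forwarding it. The double SHA-256 hash, the header length, and the nBits field at byte offset 72 follow the standard Bitcoin header layout; the helper names are my own:

```python
import hashlib
import struct

def header_hash(header: bytes) -> bytes:
    """Double SHA-256, the hash Bitcoin uses for block headers."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def bits_to_target(bits: int) -> int:
    """Expand the compact 'nBits' difficulty encoding to a full target."""
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    if exponent <= 3:
        return mantissa >> (8 * (3 - exponent))
    return mantissa << (8 * (exponent - 3))

def header_has_valid_pow(header: bytes) -> bool:
    """The cheap check a node can do before forwarding a bare header:
    exactly 80 bytes, and the header hash is at or below the target."""
    if len(header) != 80:
        return False
    bits = struct.unpack_from("<I", header, 72)[0]  # nBits lives at offset 72
    # Header hashes are compared as little-endian 256-bit integers.
    return int.from_bytes(header_hash(header), "little") <= bits_to_target(bits)
```

Two hash invocations and one integer comparison; no transaction data, no UTXO lookups. That is what makes it safe to forward the announcement before full validation.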

As I promised, these changes are mostly boring and really not Nobel Prize worthy. I still get excited about changes that have a substantial benefit, though; I'm just that kind of geek. These changes do help a lot when you add many of them together, ending up with a massively better scaling system than what we have today.

What to work on next

If we look at what it means to scale Bitcoin today, we look at all the obvious resources that any computer program needs. Most of them are really not an issue. For instance, it would take 75 years to fill up a mainstream harddrive of 3TB with today's block size. People run the client on a Raspberry Pi, and it copes just fine.
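As a sanity check on that 75-year figure, here is the arithmetic, assuming an average block size of about 0.75MB (my assumption; actual averages vary):

```python
# Back-of-the-envelope check of the "75 years to fill a 3TB disk" claim.
# Assumption (mine, not from the article): average block size around
# 0.75MB, below the 1MB cap, with one block every 10 minutes.

blocks_per_year = 6 * 24 * 365          # one block per 10 minutes
avg_block_mb = 0.75                     # assumed average block size
disk_mb = 3_000_000                     # a 3TB drive

years_to_fill = disk_mb / (blocks_per_year * avg_block_mb)
print(f"{years_to_fill:.0f} years")     # 76 years
```

So under this assumption the claim checks out to within a year; at a constant full 1MB per block it would still be close to 60 years.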

In Bitcoin the biggest scaling issue we have today is the inefficient use of bandwidth and the poor peer-to-peer network. This is demonstrated best by the usage of a centralized relay system, operating outside of the Bitcoin network, which miners use to send blocks to each other as fast as possible. As long as those centralized systems are needed we will have a permissioned system where a new player needs permission to enter. New players that don't get permission to use the relay system that all the other miners are on will not be able to compete. We don't want a system where the relay operators get to decide what miners can and cannot do, under threat of losing their access and subsequently their business.

In Bitcoin we currently have what I'd label a first-generation p2p network. Think of a system that computer-science students typically set up in class as a plaything. It's making all the traditional mistakes.

Further research

The Bitcoin network sends unlabeled binary blobs over the wire. This implies that developers can never fix a message, because old clients would not understand it and would likely fail spectacularly. Tagged formats, the most well known of which is JSON, solved this many years ago. What we need is something like HTML, where an old browser can still read the newest format without giving errors; it just ignores the parts it doesn't know.

When I worked for a financial company sending out half a million stock-price quotes every second, I learned a lot about how to do a binary protocol properly. I've already done some extended testing with a new on-the-wire protocol that looks like it saves quite substantially on bandwidth and speed. But the real benefit is that it can be extended, because the fields are labeled much like they are in JSON or XML. Its values are also strongly typed and extensible. What that means is that the protocol itself includes the information that says whether a value is an int, a bool or a string.

Making such a change would be very beneficial for maintainability, allowing bugfixes to be made backwards compatible in a clean manner.
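To make the idea concrete, here is a toy sketch of such a tagged, typed format. The tag/type byte layout is entirely my own invention for illustration, not the protocol mentioned above; the point is that an old client simply skips fields it doesn't know:

```python
import struct

# Toy tagged wire format: each field is <tag:u8><type:u8><value>, where
# the type byte says how to read the value. Unknown tags are skipped,
# which is what makes backwards-compatible extension possible.

TYPE_INT, TYPE_BOOL, TYPE_STR = 0, 1, 2

def encode_field(tag, value):
    if isinstance(value, bool):             # bool before int: bool is an int subclass
        return struct.pack("<BB?", tag, TYPE_BOOL, value)
    if isinstance(value, int):
        return struct.pack("<BBq", tag, TYPE_INT, value)
    data = value.encode()
    return struct.pack("<BBH", tag, TYPE_STR, len(data)) + data

def decode_message(blob, known_tags):
    """Decode fields, silently ignoring tags this (old) client doesn't know."""
    fields, i = {}, 0
    while i < len(blob):
        tag, typ = blob[i], blob[i + 1]
        i += 2
        if typ == TYPE_BOOL:
            value, i = blob[i] != 0, i + 1
        elif typ == TYPE_INT:
            value, i = struct.unpack_from("<q", blob, i)[0], i + 8
        else:  # TYPE_STR
            n = struct.unpack_from("<H", blob, i)[0]
            value, i = blob[i + 2:i + 2 + n].decode(), i + 2 + n
        if tag in known_tags:   # old clients just skip unknown tags
            fields[tag] = value
    return fields
```

An old client that only knows tags 1 and 2 can still fully decode a newer message that also carries tag 3; nothing breaks, which is exactly the property the current blob format lacks.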

Another issue we have with our peer-to-peer system is how an individual node finds other nodes to connect to. We have some code that decides, based on the IP-address numbers, whom to connect to. The logic seems to be that an IP address is a good indication of location, so it tries to connect to many different locations. Unfortunately, this overstates the correlation between addresses and distance, so it is only slightly better than choosing peers purely at random.

Why is this important? Take an example of 2000 nodes, all around the world, that want to get a message to every one of them as fast as possible. My thinking is that each node should measure its distance to other nodes by timing how long it takes for a message to go out and come back.

We can then order the sending of messages so that longer-distance messages are sent first and shorter-distance messages are sent later.

Think about it like this: you could send a message by courier to the next village, and he sends it to the next, and so on. Or you send a courier by plane to the next big city and he starts sending couriers out from there. The idea is that a message gets to the other side of the globe in 5 hops instead of 50.
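The scheduling idea fits in a few lines. The peer names and round-trip times below are made up:

```python
# Sketch of the "farthest couriers first" idea: measure round-trip time
# to each peer, then send to the most distant peers first so the slowest
# legs of the broadcast start as early as possible.

def broadcast_order(peer_rtts_ms):
    """Order peers by measured round-trip time, longest distance first."""
    return sorted(peer_rtts_ms, key=peer_rtts_ms.get, reverse=True)

peers = {"sydney": 310, "tokyo": 180, "frankfurt": 95, "next-door": 2}
print(broadcast_order(peers))  # ['sydney', 'tokyo', 'frankfurt', 'next-door']
```

The RTT measurement itself would piggyback on existing ping traffic; the only change is the sort before transmission.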

To be fair, there really are not a lot of systems out there that create a peer-to-peer network of this scale, with no central server, that has to send messages through the entire system as fast as possible. But this lack of competition just means it's going to be a nice challenge to find out what works and what doesn't.

Some other details

We have some work done that fixes the problem that a node can't relay transactions while it is downloading and validating a block.

For our protocol to get to a more professional level we'd also need message priority: a way for a node to push something to the top of the message queue to be sent.

We'd likely want to support both TCP and UDP queues, where UDP is used for higher-priority but smaller messages. That would be ideal for the optimistic-mining teeny-tiny block message that we want to go around the world at near lightspeed.
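A minimal sketch of such a priority queue; the priority levels and message names are illustrative, not an actual Classic API:

```python
import heapq
import itertools

# Sketch of the message-priority idea: a send queue where a high-priority
# message (say, the tiny block-header announcement) jumps ahead of bulk
# traffic already waiting to be sent.

class SendQueue:
    HIGH, NORMAL, BULK = 0, 1, 2

    def __init__(self):
        self._heap = []
        self._order = itertools.count()  # FIFO tie-break within a priority

    def push(self, message, priority=NORMAL):
        heapq.heappush(self._heap, (priority, next(self._order), message))

    def pop(self):
        return heapq.heappop(self._heap)[2]

q = SendQueue()
q.push("1MB block payload", SendQueue.BULK)
q.push("transaction relay")
q.push("block header announcement", SendQueue.HIGH)
print(q.pop())  # block header announcement
```

The header announcement overtakes a megabyte of bulk data already queued, which is the whole point of having priorities on the wire.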

And last, we'd really need to set a maximum size for a single message. The telcos switched to packet switching in the 90s; we're long overdue. We currently send a single message of one or many megabytes. Experience shows that splitting that up into multiple messages of maybe 50KB, and reassembling them on the other side, has a very positive effect on throughput, because you can send a big piece of data to two peers at the same time, using only twice the wall-time in optimal situations. And when you are sending a large block to a peer that is very slow in receiving it, this chunking stops that slow peer from dragging down your sending speed to other peers.

I've implemented this in another system where the overhead per 50KB message ended up being only 14 bytes. So I really don't see any reason to not do this.
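A sketch of the chunking idea. The 10-byte header layout here is my own invention for illustration; the system mentioned above reportedly needed 14 bytes per 50KB chunk:

```python
import struct

# Split a large message into ~50KB chunks with a small fixed header,
# then reassemble on the other side, tolerating out-of-order arrival.

CHUNK_SIZE = 50 * 1024
HEADER = struct.Struct("<IHHH")  # msg id, chunk index, chunk count, body size

def split_message(msg_id: int, payload: bytes):
    total = max(1, (len(payload) + CHUNK_SIZE - 1) // CHUNK_SIZE)
    for seq in range(total):
        body = payload[seq * CHUNK_SIZE:(seq + 1) * CHUNK_SIZE]
        yield HEADER.pack(msg_id, seq, total, len(body)) + body

def reassemble(chunks):
    parts, total = {}, 0
    for chunk in chunks:
        msg_id, seq, total, size = HEADER.unpack_from(chunk)
        parts[seq] = chunk[HEADER.size:HEADER.size + size]
    # Chunks may arrive in any order; glue them back by sequence number.
    return b"".join(parts[i] for i in range(total))
```

Ten bytes per 51,200-byte chunk is about 0.02% overhead, so the cost of chunking is negligible next to the scheduling freedom it buys.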

These are useful changes that I'm hoping to see being worked on, if not by me, then someone else.

Other issues will show up as we get bigger blocks and higher throughput, and they will be found and fixed as we move forward. I have no doubt about that. That is the nature of this game; there is always some optimization that we haven't done yet.

Beyond the network layer there are a lot of other things that look useful to work on in order to allow more data to go through Bitcoin cheaper and faster. It's still young software and I don't think it's been examined with profilers or other speed measurements very often. Gavin started a benchmark suite that never really saw uptake by any of the other developers. So expect a huge amount of throughput benefits in the coming years alone; optimizations are easy pickings when none have been done for years.

I would say that within a year the network would be safely able to handle between 20 and 50 MB blocks.

Creating a Bitcoin that, on-chain, can grow and support more users will be the thing that actually creates more value for the Bitcoin ecosystem. So while our opponents are discussing how to distribute the money of the rich differently, until they inevitably run out of money, we are working to actually create new value to increase your wealth as well as ours.

Bitcoin has a bright future.


When I first started working with Bitcoin, a couple of years ago, I tried to find out how to contribute code and to understand the Quality Assurance policies they held.

At this time I naturally was talking to people from the Bitcoin Core group, and the answers I got were quite confusing to me.

After having worked in the software industry for 20 years I had become very reliant on good quality assurance policies, as well as on good use of distributed source control. It turns out that Core didn't really have any of that in place.

When I was asked to work with Gavin on Bitcoin Classic, I first started to work on quality assurance policies and practices. And I'm happy to say that we have reached at least industry standard, if not a little above. But I hasten to say that there is still room for improvement.

Git workflow

The first thing that I changed is that we now have a proper git workflow. This means that developers fix bugs on the stable branch and we use git merges to bring those bugfixes up to the development branch. You will find this process in any git book, so I won't spend too much time on it. The important part is that merges between branches are done often, and that way we can't lose work or introduce bugs by doing it manually. See also our contributing document.

Release process

The second part that changed is the release process itself. Because we now have a stable branch that is always releasable, the release process becomes easier. The aim is to follow the good old open source concept of "Release early, release often".

One of the biggest changes is that we now follow the Semantic Versioning concept which is almost universal for open source software as well as for many closed source products.

Let me quote the core concept:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.
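Semver precedence maps naturally onto tuple comparison; a small sketch:

```python
# A small sketch of comparing Semantic Versioning numbers as described above.

def parse_semver(version: str):
    """Split 'MAJOR.MINOR.PATCH' into a comparable tuple of ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return (major, minor, patch)

# Tuples compare element by element, which matches semver precedence:
assert parse_semver("1.2.0") < parse_semver("1.10.0")   # 10 > 2 numerically
assert parse_semver("2.0.0") > parse_semver("1.99.99")  # MAJOR wins

def is_safe_upgrade(old: str, new: str) -> bool:
    """A newer MINOR or PATCH within the same MAJOR is backwards compatible."""
    return parse_semver(new)[0] == parse_semver(old)[0] and \
           parse_semver(new) >= parse_semver(old)

print(is_safe_upgrade("1.1.0", "1.2.0"))  # True
print(is_safe_upgrade("1.2.0", "2.0.0"))  # False: incompatible API changes
```

Note the numeric comparison: treating versions as strings would sort "1.10.0" before "1.2.0", which is exactly the mistake semver's rules avoid.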

Additionally, I wrote this in the release tag of v1.1.0:

At first we wanted to follow the Core releases, but they have different core values and push out hundreds of lines of new code in a bugfix release, which feels dirty to have to inherit. As a result, the CSV soft-fork feature release received a bigger version bump. And once that disconnect between Core and Classic versions existed, it stopped making sense to stay close to Core's versioning.
It was time to review the numbering scheme altogether.

When I talked about the "less than 1.0" versioning over beers with Eric S. Raymond (author of The Cathedral and the Bazaar), he explained it like this:
the idea was that when the app reaches the set of features required for the user with the lowest demands to use it for its intended purpose, that's when you call it 1.0.

Bottom line, Bitcoin is certainly far past its 1.0 release. We will follow the Semantic Versioning from now on.

Continuous Integration building

What I was really missing was a proper Continuous Integration (CI) system. Naturally, there is the freely available Travis, but that has been a frustrating experience and really not useful as a build server. Apart from Travis being really slow (taking an hour or more), around 60% of the time when it reports a failure the problem actually isn't in our code, so it's really not something to rely on.

I got a dedicated server in a secure environment and installed Teamcity on it. Teamcity is one of the most used CI software systems out there and quite useful for our purposes.

It does a couple of things:

  1. Every single time a change is made in git (in any branch) it builds it for all platforms and runs the unit tests, at least for the Linux builds. Notice that it builds in a fraction of the time that Travis took; typically a build completes in less than 15 minutes.
  2. It builds both using the 'gitian' concept of a reproducible (static) build environment for all platforms, and in the native Linux way using the platform's dynamic libraries. This makes sure we know it works on more than one version of boost and all the other supporting libraries.
  3. It makes nightly builds of the development branch, allowing people who don't want to compile to test them.
  4. It pushes the build to remote builders (like Ubuntu's launchpad) to have an easy way to distribute nightly builds or maybe even future releases with minimum effort.
  5. It tests that the Debian contrib actually builds properly.

As I mentioned, the server is in a secure location. What that means is that it is not connected to the internet and is physically protected, to make sure we can trust its output. When we ship sources, anyone can quickly check that they are unaltered from the tags in git. But with binaries, (evil) changes can be much harder to detect. So I refuse to gpg-sign any releases unless I know they actually came from the exact sources that are in git, visible for all.

Ongoing work

There are plenty of issues that could use improvement. It turns out that the original developers don't really like writing unit tests, and the code lacks the organization to make unit testing practical or easy. With some small changes such efforts could be made much easier.

We also have exactly one executable that runs all the tests. This goes against the unit-testing practice of having small units, because one failure in one test can cause a lot of the tests after it to fail too, mostly because memory state is not cleaned up properly after a failure.

Ideally we'd have a lot of small test executables run by a simple runner, with the added benefit that you can run several in parallel (one per CPU core), which makes the whole run complete much faster as well.
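Such a runner can be very small. A sketch, using throwaway Python one-liners as stand-ins for real test executables:

```python
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

# Sketch of a simple parallel test runner: each "test executable" runs in
# its own process, so one crashing test can't poison the state of the
# others. The test commands here are stand-ins, not real test binaries.

def run_one(command):
    name, args = command
    return name, subprocess.run(args, capture_output=True).returncode

def run_all(commands, workers=4):
    # Threads are fine here: each worker just waits on a child process,
    # so the test executables genuinely run in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(run_one, commands))

tests = [
    ("test_pass", [sys.executable, "-c", "pass"]),
    ("test_fail", [sys.executable, "-c", "raise SystemExit(1)"]),
]
print(run_all(tests))  # {'test_pass': 0, 'test_fail': 1}
```

Process isolation means a segfault in one test shows up as a non-zero exit code for that test alone, instead of taking the whole suite down with it.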

We currently have a benchmark framework, but nothing in it. The idea of a benchmark is that time-critical code can be tested there, and the output printed and plotted over a long time (months) to verify that such code doesn't get slower through unintended changes from release to release.
I find it hard to believe that Bitcoin has no time-critical code, so there should be real value in locating it, testing it and keeping it from getting slower.

Better integration with Linux distros. Bitcoin was made to be shipped as one big executable with all the libraries inside. The thinking was that this saves the application from misbehaving when another library changes behavior. While the idea seems logical, it shows a lack of experience, because the open source community has been solving this problem for 30 years, and quite successfully. Going against that only introduces a different set of problems. For instance, when the libc DNS exploit was made public, Bitcoin dodged a bullet because libc is one of the very few libraries we don't actually compile in. But imagine one of the libraries we do include has a similar vulnerability: the only way to fix it would be to rebuild Bitcoin and get everyone to manually update. The Linux alternative of just running apt-get upgrade or similar is vastly more secure and better supported.

As mentioned in the CI section, I already added support for Classic in launchpad. This may be something to expand upon and support more Linux distros in a similar fashion.


Bitcoin Classic has a strong Quality Assurance policy, adopts long-established industry-standard practices and sheds unsafe self-invented ones from upstream.

We also take all the best practices from Open Source and Free Software to make releases often for those that want to test the latest and contribute by reporting issues, or simply by being part of the network.

The future is bright!

On Mining and Wallets

Don't leave your money on the street.

Bitcoind is labelled the backend of the network. When I explain this to friends, I say it's like the router you have stuffed behind the sofa or, if you are more professional, in a server room, sometimes not even in the same country you are in. Either way, it is something needed for everything else to work, but mostly ignored.

This is how I imagine the users of Bitcoin Classic to use the software, and it is the way I think it should be. They use it as something that just works to make it possible for them to keep their business or infrastructure running.

One such group of Bitcoin Classic users is the miners: they run Bitcoin Classic to have an accurate view of the state of the network. New transactions are collected in Bitcoin Classic and it creates new block templates for them. In short: Bitcoin Classic helps miners connect to the network.

What is totally out of character with the design that Bitcoin has had for years is that, when the miner actually mines a block, the block reward is "stored" in the Bitcoin node.

This means that your Bitcoin Classic node suddenly isn't just a piece of network infrastructure anymore. It holds thousands of dollars worth of value, and because of that you have to protect it, which is not always compatible with the way you want to use a piece of network infrastructure.

See, the original Bitcoin node that Satoshi made was a reference client: it included the ability to relay messages, it had a wallet and it knew how to mine blocks.

Over the years we stopped using the mining software because other software and hardware solutions have appeared that were much better.

We have also seen many better wallets appear, which are much more widely used than a full-node-based wallet.

All of those still use the bitcoin full node software, like Bitcoin Classic, as a platform to build on. They communicate via various channels with the Bitcoin software which in turn connects them to the Bitcoin Network.

It is in my opinion time to separate Bitcoin Mining from the Bitcoin Wallet. We should no longer force Miners to use the Bitcoin wallet that is shipped in Bitcoin Classic. We should no longer force the horrible security practice of storing bitcoin private keys (and the money they represent) on a piece of equipment that really is meant to be like a router or a hub connecting your business to the Bitcoin network.

Stop demanding a wallet in a mining node

In Bitcoin Classic's development branch we have changed the system to allow mining on a node that does not have a wallet compiled in.

I introduced a new RPC command called createaddress, which returns something like this:

  "address": "1E852VpivAYpZcwGo5bNB9U4twjnJfrL2c",
  "private": "KyjwYTJrhAS14fV7fP16Z9bhiudpPcSTT5HpPgpoampS57zgT59w"

The private key is the one piece of information needed to later spend the money stored on the address. To benefit from this change, the miner should store this private key in a secure location.

The pubkey and the address do not need to be stored securely; they can be used in future mined blocks. When a block is mined with the pubkey set, the money can later be redeemed using the safely stored private key.

A second new RPC command is setcoinbase. When called with the 'pubkey' field from above, this causes all following calls to getblocktemplate to generate templates in which, once the block is mined, all the fees and the block reward go to that address.

People can use the output of createaddress or, if they don't like change, just use the existing getnewaddress and validateaddress RPC calls to create a coinbase that will end up in their bitcoind wallet.

Additionally, the command-line option --gencoinbase has been added to bitcoind. It has the same effect as the setcoinbase RPC command and may be useful until the mining software is upgraded to use the new RPC commands, because calling getblocktemplate without setting a coinbase will now return an error.
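Putting the pieces together, the intended miner workflow might look like the sketch below. The rpc() helper is a stand-in for a real JSON-RPC call to bitcoind, and the returned key material is fake:

```python
# Sketch of the miner workflow with the new RPC commands described above.
# rpc() is a stand-in transport; a real client would POST a JSON-RPC
# request to bitcoind. All key material below is made up.

def rpc(method, *params):
    if method == "createaddress":
        return {"address": "1E852...", "pubkey": "02abc...", "private": "Kyjw..."}
    if method == "setcoinbase":
        return None  # the node remembers the pubkey for future templates
    raise NotImplementedError(method)

# 1. Create a key pair once; put the private key in cold storage.
keys = rpc("createaddress")
private_key = keys["private"]     # store securely, never on the mining node

# 2. Tell the node which pubkey future block templates should pay out to.
rpc("setcoinbase", keys["pubkey"])

# 3. From here on, getblocktemplate produces templates whose coinbase pays
#    the address, while the node itself holds no spendable keys.
```

The node only ever sees the pubkey, so compromising the mining infrastructure no longer exposes the block rewards.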

This change is still only available on the development branch, and has not been scheduled for release just yet. So there is still time to give feedback on what you like or dislike or would like to change.

In my opinion these changes will have the positive effect that miners can feel much safer when their bitcoind connects them to the bitcoin network, without needing all the security that a full wallet requires.

With Information Comes Understanding

The story I want to tell today is one of confusion.

Confusing translation. By dandownunder

I have been a software developer for various decades, and in all that time the way I start a new job or a new task is similar. It is one of learning. When I started at a company that creates medical hardware, I ended up spending quite some time reading through books meant for nurses. When I started at a company that makes a stock-trading platform, I had to learn about the financial industry. The knowledge of being a software developer is like a novel writer knowing English or Russian: it doesn't mean you have anything interesting to write. You need to learn.

This is why I love my profession of software developer. I get to do something completely different on a regular basis.

Learning about Bitcoin was surprisingly difficult. I've been learning for almost 4 years and I am certain I will continue for many more.
I would have to say that of all the industries I have studied, learning Bitcoin's details has been the hardest.

I have been talking to quite a lot of people over these years, from forums like Reddit and 8btc to chat, email and VOIP calls with the experts. I found that I was not alone in my difficulty discovering pertinent details about Bitcoin.

The following scenario would happen on a regular basis: a couple of random people on the internet are discussing some detail. For instance, they talk about the claim that miners would never willingly break the Bitcoin consensus rules.
After some time an expert comes in and resolves the conflict by stating some fact. In our example, he could point out that after the 2012 halving, various miners continued for some hours to mine blocks with a 50BTC reward.

I've always wanted to be so wise and learned that I knew all those facts, in the hope that I could become an expert.

Until now.

In Bitcoin there are currently a very small number of experts. This generates a handful of problems. The most obvious is the one I outlined above: it makes it hard for new people to enter Bitcoin and become productive. New people need known experts to help them. Many other problems are more subtle.

Problems we face today:

  • With the experts having more information than the rest of us, those experts become authority figures. We need their OK, because otherwise we will likely do things wrong.

  • Bitcoin is a field that spans economics, finance, the history of money, psychology, various fields of software architecture (databases, peer-to-peer networking, cryptography), and probably more. It is impossible for any one person to be an expert in all of those. Yet when we talk to our Bitcoin experts, I have never once heard any of them say they would rather defer to someone more knowledgeable than themselves.

  • Discussions about Bitcoin can be about opinions or about well-researched facts. In the current world it is impossible to differentiate fact from opinion, because we can't independently validate those facts.

I have become convinced that the first big step we need to make to create a more healthy Bitcoin ecosystem is to make access to currently well hidden facts completely open for anyone to access and contribute to.

Wikimedia writes:

Imagine a world in which every single human being can freely share in the sum of all knowledge.

I think that goal is an inspiration Bitcoin could use very well. I would like to see a Free Wikipedia for Bitcoin technology.

There are quite a number of websites today that have a small amount of information, typically on one topic for one specific group of users, for instance people new to Bitcoin. But nothing combines those ideas and facts.

A special mention should go to the website which has a lot of in-depth information. Unfortunately most of it is hopelessly outdated, and looking through the discussions it becomes obvious this is because it has been strictly guarded by a small number of people who would remove, without discussion, any opinions not already known to them. This kind of behavior is the death of cooperation.

The secondary goal, then, should be to create documentation that cannot be censored or controlled.


Part of the solution; git.

Git is a tool used for many years by enthusiasts and professionals alike to create a distributed workflow. Anyone can create changes and offer them to the world, to be accepted or rejected on merit alone. Git allows anyone to start contributing without permission. It also removes central ownership: it is no longer necessary to convince one group of the worth of your changes; there may be various groups, each creating what they think is the best version.

Lowering the threshold of entry.

While git enables the actual working together that is required, it is just a basic layer. Many end users don't want to use git, and that should be perfectly OK.
Much like Wikipedia lets a user do everything from a web browser, we need a way to do the same, so that we don't scare off people who could really contribute but lack the technical knowledge to install and use tools on their machine.

In my search for a solution I found a tool called ikiwiki. It combines the concept of a wiki with the concept of distributing control using git. What one person changes on the website can be merged with what another person makes at another time, even when those people use a different website or work in a different team.

Starting the revolution.

To start somewhere, I present a website that has all the content (including all revisions back to its start in 2010) copied from what has so far been the main Bitcoin wiki. I have put it online for now. It is a simple website and editing is currently disabled. The wiki database has been converted to markdown, a more modern alternative to what the old wiki used. This is also the native format that ikiwiki uses. Don't worry, you are unlikely to have problems with it, as you may already know it quite well: markdown is used at many places like GitHub and Reddit.

That is the current version; the complete history is found in git on GitHub.

But even with the full historical content, I do believe we need some heroes to step up and do some major work. Many pages are hopelessly outdated and there are quite a lot of flaws in the actual content.

Next to that, many pages use rendering templates that have not been ported over to the git repository. I suspect most of them are in fact irrelevant; some closer inspection is needed.

Be part of the solution, please help

In my own humble opinion the goal of the old wiki was wrong: it highlighted all the companies in the space, including gambling sites and places selling hardware. This is fast-changing information and frankly serves nobody but the companies. Likewise, pages about people feel out of place; it so quickly becomes about being better than everyone else. I don't have a Wikipedia page and I think I'm better off without it.

Actual technical information, backgrounds, overviews and historical facts would be useful to write down. In my opinion it should be a source of information where fresh content is shared and research is published.

We need people to go in and slice up the bad stuff, move pages to better locations and move out useless content. I have already spent too much time on this, time I could spend on writing better Bitcoin code. So I hope others can pick up the baton, fork my bitcoinwiki project on github and work on making it ready for more and more people to come in and add their work and content.

Following the title of this blog post, sharing how Bitcoin works and sharing knowledge about many of its darker corners will allow the conversation to shift back to a less emotional one. It is always easier to discuss topics when opinions are not confused with facts and facts are not fought with opinions.