What is new in Bitcoin Classic (2017/02/20)
Feb 20, 2017

This post covers what has happened in Bitcoin Classic since the last report.

## New stuff

In a continuing push to make the Bitcoin Classic client the best one to use on Linux, support for the XDG Base Directory Specification for config files was added some time ago. Continuing in that direction, contributor Stephen McCarthy has been hard at work on the command-line parser code.

The code we inherited when we started Classic is shared between various projects; in all of them, a set of problems is present that makes life a lot more difficult for sysadmins and operators.

• Passing a non-existent config option or command-line argument was silently ignored, so typos went undetected. The most common problem was that adding a leading - to a config-file option caused that option to be silently ignored.

• Flag usage was inconsistent and unpredictable. The only form that consistently worked was '-flag=1'; sometimes -flag worked too, and the most fun problem was that '-flag=true' would more often than not end up turning the flag off.

• Inputs were not validated at all. If you passed something that wasn't a number to an option that expected a number, the option would typically end up set to zero. At best you might get a warning in the logs.

Over the last weeks, Bitcoin Classic contributor Stephen McCarthy fixed all those issues, and many smaller ones as well. If you pass a command-line option that doesn't exist, or pass something other than a number to an option that expects one, you will get an error and your node won't start. Additionally, yes/no flags have become more intuitive and work properly with any of 0/1, false/true and yes/no.

Pull requests: 225, 226, 227, 228.
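To illustrate the kind of strict validation described above, here is a small, hypothetical sketch in Python. Classic itself is written in C++, and the option names, word lists and function below are invented for illustration; they are not Classic's real API.

```python
# Hypothetical sketch of stricter option handling; not Classic's real code.
# The registered options and their types are invented for illustration.
KNOWN_OPTIONS = {"server": bool, "dbcache": int}

TRUE_WORDS = {"1", "true", "yes"}
FALSE_WORDS = {"0", "false", "no"}

def parse_option(name, value):
    """Validate one option and raise instead of silently ignoring bad input."""
    if name not in KNOWN_OPTIONS:
        # Typos no longer go undetected: the node refuses to start.
        raise ValueError("unknown option: -%s" % name)
    kind = KNOWN_OPTIONS[name]
    if kind is bool:
        # Yes/no flags accept any of 0/1, false/true and yes/no.
        word = value.lower()
        if word in TRUE_WORDS:
            return True
        if word in FALSE_WORDS:
            return False
        raise ValueError("-%s expects a boolean, got '%s'" % (name, value))
    # Numeric options reject non-numeric input instead of defaulting to zero.
    try:
        return kind(value)
    except ValueError:
        raise ValueError("-%s expects a number, got '%s'" % (name, value))
```

With this behaviour, '-server=YES' and '-server=1' both enable the flag, while a typo like '-dbcashe=300' aborts startup with an error instead of being silently ignored.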

Tom Zander committed a small change to prepare for a world where the block size is no longer a centralized rule, allowing for a healthier network: the 'Punish nodes relatively for oversize blocks' commit.

Tom Zander added a new feature for Flexible Transactions called -ft-strict. The commit message reads:

In Flexible Transactions we have undefined tokens which are ignored by current software if present. This allows nodes to stay compatible when new tokens are invented. For miners this policy rule should likely be turned on so transactions that use tokens that are not yet defined will be rejected.

Tom Zander imported the "Expedited forwarding" technology into Classic's master branch. commit.

## Code cleanups / improvements

Tom Zander created a new class which is now responsible for all orphan-transaction handling. This is one step towards solving the problem of having no code structure and everything in one big 5000-line source file. This is a non-functional change, making the code easier to maintain and understand. commit.

Tom Zander noticed a potential bug, or at least a bad design, in the usage of the assert command. This is explained better in the resulting 'Junior Job' issue. The fix is in commit.

Tom Zander refactored the blocks-database handling code. Similar to the orphan code above, this is a non-functional change that moves code around to make it easier to understand and maintain. commit, commit, commit.

Amaury Sechet created a new PR which updates the 3rd-party library secp256k1 to a newer version. We also realised that placing imported libraries in the src directory is confusing for many, as it obscures the fact that they are snapshots developed elsewhere. No solution has been settled on at this time.

## GUI (Qt)

A PR by Stephen McCarthy was merged that fixes an assertion error on -disablewallet, repairing a regression introduced by another dev just a couple of days earlier.

Another PR by Stephen McCarthy was merged that makes 'Abort' the default button on a very destructive dialog, where a user input mistake could cause many hours of work to recover from.

Tom Zander created a commit with the subject 'Make splashscreen work on hi-DPI displays'. commit.

## Misc

New contributor Benjamin Arntzen created a PR to fix some text, which ended up being applied to the website as well.

Tom Zander imported the historical release notes to the website: https://bitcoinclassic.com/news/

If you are interested in the Bitcoin Classic project, please look at our community page for how to get involved.

Posted
UTXO Growth
Feb 16, 2017

Over the last year we have seen quite a lot of people get worried about UTXO growth. Let's find out why.

For those catching up: the UTXO is the Unspent Transaction Output database, the essential database needed to find out whether a new incoming transaction is actually spending money that exists.

The growth is explained well in this blog from Gavin Andresen; the blog is a bit older now, and the growth has continued since.

There are several reasons why people claim this is an issue. Let's address each of them in turn.

### The UTXO needs to be in memory completely. It will grow faster than internal memory can.

Answer: In all Bitcoin full nodes, the entire database is stored on disk, with a smart memory cache to speed up lookups. The memory cache is configurable and the default is 300MB (it's the dbcache config option). The entire database is significantly larger than 300MB, as the above blog states.

The claim that it has to be in memory is odd, because none of the full node implementations actually do that. Sure, miners can adjust their settings to allow a bigger cache and be different from everyone else, but this doesn't mean the UTXO should stop growing.
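The disk-plus-cache layout can be sketched roughly as follows. This is a toy model in Python, not the actual LevelDB-backed C++ implementation; the class, its names and sizes are invented for illustration.

```python
from collections import OrderedDict

class UtxoView:
    """Toy model of a disk-backed UTXO set with a bounded memory cache,
    loosely analogous to the dbcache behaviour described above."""

    def __init__(self, disk, cache_entries=1000):
        self.disk = disk              # stands in for the on-disk database
        self.cache = OrderedDict()    # small in-memory cache of hot entries
        self.cache_entries = cache_entries

    def lookup(self, outpoint):
        """Return the unspent output for an outpoint, or None if unknown."""
        if outpoint in self.cache:
            self.cache.move_to_end(outpoint)    # mark as recently used
            return self.cache[outpoint]
        coin = self.disk.get(outpoint)          # slow path: go to disk
        if coin is not None:
            self.cache[outpoint] = coin
            if len(self.cache) > self.cache_entries:
                self.cache.popitem(last=False)  # evict least recently used
        return coin
```

The point is simply that only the hot subset needs RAM; the full set lives on disk, however large it grows.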

### Bigger blocks make the UTXO grow faster!

Answer: The UTXO is a database of where money is stored. With more people using and holding Bitcoin, it will grow. The UTXO will not just grow based on more transactions; it will grow because we get more customers, which can only happen when we have bigger blocks.

So the argument isn't directly false, it is misleading: it misses the fact that UTXO growth is driven by more people using Bitcoin. Maybe the people making this accusation realise that saying they do not want more customers would make them look stupid.

### UTXO on disk will make miners lose money because of slow block validation.

Answer: This may have been true in the past, but it is no longer true since we introduced xthin blocks. This technology allows any node to communicate a new block with only the header and some metadata. The receiving node then uses the transactions in its memory pool to reconstruct the block.

The direct result is that when a new block is mined, miners will not validate the vast majority of its transactions again, because the transactions in their mempool were already validated. This means there is no longer a bottleneck for miners using xthin (available in BU and Classic).
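The reconstruction step can be sketched as follows. This is a simplification for illustration only: real xthin uses 64-bit short transaction hashes and a bloom filter rather than the full txids used here, and the function name is mine.

```python
def reconstruct_block(header, txids, mempool):
    """Rebuild a block from its header plus transaction ids, pulling
    already-validated transactions from the local mempool. Returns the
    reconstructed transaction list and the ids that still have to be
    requested from the peer."""
    transactions, missing = [], []
    for txid in txids:
        tx = mempool.get(txid)
        if tx is None:
            missing.append(txid)      # only these travel over the wire
        else:
            transactions.append(tx)   # no need to validate these again
    return transactions, missing
```

In the common case the mempool already holds nearly everything, so almost no transaction data needs to be transferred or re-validated.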

Bottom line: it's a database. MySQL/PostgreSQL/MongoDB etc. have had decades to perfect database technology. Being scared of a growing database on today's hardware is unfounded.

What is new in Bitcoin Classic (2017/01/30)
Jan 30, 2017

Bitcoin Classic is a typical open source project where individuals come together in order to join forces and create the best solution they can. It can be a bit chaotic to keep track of what is going on and for this reason I want to write a little update on what has been going on in Bitcoin Classic.

There have been two similar posts in another forum before: 10 Jan & 18 Jan.

A very brief recap for those that have not followed along: Bitcoin Classic got a restart, which I introduced with 'Classic is Back' here 2 months ago. We released version 1.2 some weeks ago, introducing various new technologies. The CMF bindings project got its release shortly after.

Last week I spent most of my time in a plane or at the airport, travelling from the EU to Mexico for the Satoshi Roundtable III. I very much enjoyed meeting many Bitcoin-friendly faces and having discussions about how we can further Bitcoin and Bitcoin Classic. At the same time, this is my apology for there being much less coding progress.

We have seen bugfixes in the 1.2 stable branch (to become the 1.2.1 release):

• [QT] Bugfix: ensure the tray icon menu is not empty. By Stephen McCarthy.
• Fix the failing maxuploadtarget.py QA test from the -extended set. By Stephen McCarthy.
• Make configure fail on old Boost. (1.55 has been the minimum for some time.)

On the 'master' line of development (to become the 1.3 version) we have seen various improvements as well:

• We merged a PR written by Pat O'Brien that sanitises the default permissions of files written by the node on Unix, adding code to make sure that wallet.dat is only ever visible to the main user.
• Merged a PR from Søren B.C. to remove some dead code (stray semicolons) from many files.
• Merged various PRs from Amaury Sechet for backports and to add the Sublime Text project file to gitignore.
• Merged a PR from Stephen McCarthy that makes sure any unrecognised command-line arguments or config-file lines cause a warning on startup. This should make it much easier to configure the application.
• Cleaned up the NetworkStyle usage. This makes the code less fragile and more coherent.
• Removed duplicate (copy/pasted) code originally imported from the xthin project.

If you are interested in the Bitcoin Classic project, please look at our community page for how to get involved.

Governance Starts Somewhere
Jan 11, 2017

Last week I was at the Satoshi Roundtable III, which was very effective in bringing about 100 interesting Bitcoin people together in a way that made it very easy and productive to just sit down on one of the many sofas (or even on the beach) and talk with people who share your interests.

To link to someone that is a much better writer than I am, Rick Falkvinge wrote "Overall, my assessment is that bitcoin is lacking project management".

I fully agree with that. In Bitcoin we now have different groups creating their own solutions and then trying to push them onto the rest of the world, typically without actually listening to criticism, and then being surprised when the world rejects them.

Last month I re-read a novel (Moving Mars), a science-fiction story in which some 20 large families live on Mars as separate settlements. They set out to create a new, global government. To my surprise, step 1 of the process was to decide on a way to make decisions, and only after that did they create a constitution.

If we want to have any hope of so many smart people making progress again, we need to stop trying to skip step 1. We need to allow people to talk to each other and start the conversation on how to address the concern Rick formulated on his blog.

In Bitcoin we used to have a solution for that, the bitcoin-dev mailing list. This has been a failed experiment, with many examples of lost opportunities, and even a group like Bitcoin Unlimited simply refusing to use it because it is not neutral. The secondary effect is that this same group has refused to write BIPs (Bitcoin Improvement Proposals), because the place where they are proposed and discussed is that same list they won't join.

This week I reached out to the Bitcoin Unlimited team on their Slack to get their input on starting a new mailing list, under new management, that can accomplish the goal of cross-team communication. We had some good ideas; one of them was to call it the 'post-1MB planning' list.

Specifically, what am I thinking about?
If anyone wants to make changes to the protocol, which affect all implementations, there should be a neutral place (not affiliated with any specific implementation) where they can be discussed.

• In Bitcoin we currently have problems with the block-size limit of 1MB. This is a difficult topic because the parameter is part of the protocol, a "consensus rule". Interestingly, there is currently no bitcoin-wide document describing how the consensus rules are actually supposed to be changed to support Bitcoin Classic or Unlimited. This needs to change.

• There is no place where people who develop a wallet or other bitcoin infrastructure can talk about consensus changes and get familiar with what others propose.

• Classic recently published its long-term (5 year) plan. It would be great to be able to discuss this in a place where multiple implementations work together, so we can all make sure we work in the same direction and avoid nasty surprises when other groups go in a different (maybe better) direction!

With the modern web, I've seen people suggest websites like forums to discuss this. The problem with a forum or site is that it excludes people who cannot reach it, and it risks the operator of the site going offline with all the history and content.

Traditionally the best way to do this has been email lists; thousands of projects use them successfully. I would like to suggest that the best way forward is doing that again, but I'm open to better suggestions if they don't introduce new problems.

This is a question to all software developers and organisations that want to be involved in how Bitcoin evolves over time: what are your requirements, and what can you offer, for such a platform of coordination to move Bitcoin forward?

Bitcoin: the Long View
Jan 12, 2017

Bitcoin is an amazing idea that has been running with great success for 8 years. It is far from perfect, though, and many issues will take time to rectify in a good and responsible manner. I want to share the longer-term engineering goals for the Bitcoin full node that direct my priorities; you have been able to see some of them already in the Bitcoin Classic full node, which released its 1.2 version last week.

### 1 Protocol documentation

Bitcoin as a whole is something we often call a protocol. But unlike most protocols, there is precious little documentation out there that describes in detail what actually goes over the wire.

The first goal is to move towards a Bitcoin that is fully documented. The point here is that the documentation of the protocol is to be 'leading': if two implementations disagree, the protocol documentation is the one that is right. This avoids fluffy arguments such as whoever has the biggest market share, or has been around longest, somehow being more right.

### 2 Backwards compatibility of protocol

Parts of the current Bitcoin protocol were designed without keeping industry best practices in mind. For many parts this is not a big deal, but some of those best practices exist for a very good reason. A good example is that practically all of the data structures in the Bitcoin protocol are not changeable: it is impossible to add a value to a p2p message, and you can't remove an unused variable that is stored in each and every transaction.

The second goal is to move towards tagged protocol data-structures. The idea of tagged data goes back decades, well before Bitcoin was created. The point here is that we know mistakes have been made and will continue to be made by humans extending and fixing Bitcoin. For this reason we need the ability to cleanly make backwards compatible changes. Adding a new field in an existing p2p message is much cleaner than having to create an entire new message-type with all the same info, plus one item.

Note; The basic concepts of Bitcoin are sane and sound, we should not change those!

### 3 Database access

Bitcoin as an industry depends on the blockchain as the universal database that everyone shares and uses. The main property of a database is that it can provide fast access to the information you seek. A normal database would be able to return all transactions since a certain date, as a quick example.

Unfortunately the blockchain in any full node has data-access methods that are quite primitive and very slow, making the blockchain essentially private data. This means that block explorers end up having to re-create a full database, and research into usage patterns and many other properties is limited to the few who have plenty of patience.

The third goal is to move towards a full node that provides full access to Bitcoin, including its database. The simple access of the raw data is something that can be made substantially faster and it opens the node to being much more useful for a large range of features.

This blog originated from the Classic long-term roadmap, which has a second section going into more detail on each of these goals.

## Join the Bitcoin Classic community

Do these goals align with what you are looking for in Bitcoin? Please consider joining us. Running the client, sending an email when you find a typo or simply sharing your story on reddit are great ways to start. Read the Classic community page for many more ways to join this exciting movement.

Jan 11, 2017

Bitcoin Classic is about 1 year old, so let’s take a quick look back.

When Bitcoin Classic was founded, our goal was to work with the established Bitcoin development teams and grow Bitcoin on-chain. The obvious and only factor we focused on was the limit on the size of blocks. Bitcoin Classic compromised, deeply: we went for the most conservative block size increase possible, from 1MB to 2MB.

The response we triggered, within days of the release, was shocking: several miners, developers and the Blockstream company president agreed that Bitcoin Classic was not to be allowed to compete with the bitcoin establishment, and they made their proclamations publicly and openly.

In the following months the implications of this action began to sink in. Those who want Bitcoin to grow to allow more users realised compromise with the establishment was a mistake. After a year of working together, compromises and peace offerings, the establishment didn't just reject all of them, it actually rejected the idea of decentralisation and open discussion.

In the latest version of Bitcoin Classic we no longer compromise. The user is at the forefront, and we have listened to the many voices which today find the system slow and expensive.

The solution, included in Bitcoin Classic 1.2, takes the power of setting the maximum block size away from the “establishment”, those who have shown they cannot be trusted to represent the bitcoin community.

They have shown they are not tolerant of open discussion or educated disagreement, and these established developers have abused their first-mover position to centralise decision-making power in Bitcoin, the first decentralized P2P currency.

Power in Bitcoin is decentralized by design.

Centralized decision making is successful when people trust and support the software organization, without the need for questioning the established group of developers. However, the true power lies with all of the individual participants in the network. Only for as long as individuals keep running centralized software, using centralized services and tolerating centralized solutions will any established group have power over Bitcoin.

The solution for the block size debate is to remove the ability of software developers to set a maximum block size limit. Instead, give the power of this limit to the people. The limit can now be set by the full node operator, the exchanges, the online wallets and the home users.

Because with Bitcoin we can route around centralised decision making. Bitcoin Classic joins Bitcoin Unlimited to give people the tools to do so. People running one of these Bitcoin clients no longer have a block size limit.

But make no mistake - this is not a fight between software groups.

To be clear: because Bitcoin is decentralised in all aspects, we all decide what Bitcoin is by running code and participating in the system. If we want to change something, we choose different code, and when the collective joins you in your choice, Bitcoin changes, for the better.

Every participant makes their own choice, following their own best interest within the collective system. There is nothing in the code or the system that prevents this from happening; in fact, it is deeply embedded in Bitcoin as a dynamic system that evolves and grows without any centralized decision making.

You have to stand up for your opinion because nobody else can.

People like you can make their voice heard by doing something as simple as running Bitcoin Classic. You can also write to companies that are undecided to explain your position, or simply link to this text.

Bitcoin Classic works for the people, and if people act to maintain Bitcoin as a decentralized project, the best will happen.

Blocksize Consensus

As we near the close of 2016 (year of the Monkey), there is a growing awareness of how we can solve the question of block size limits for many years to come. Please see here for an overview.

The idea is that we can move from a centrally dictated limit to one where the various market participants can again set the size in an open manner. The benefit is clear: we remove any chance of non-market participants having control over something as influential as the block size.

A good way to look at this is that the only ones that can now decide on the size are the ones that have invested significant value, believing in an outcome that is best for Bitcoin as a whole.

The credit goes to Andrew Stone for coming up with this concept of building consensus. He also published another, intertwined, idea which is the Acceptable Depth (AD) concept.

If the open market for block sizes is still making people uncomfortable, the AD concept is causing outright pain. The concept has had many detractors, and the way complaints and attack scenarios have been handled is by adding more complexity. The introduction of something called the "sticky gate", created with no peer feedback, is a good example: there is now an official request from stakeholders asking for it to be removed again. The author's response to that request was not to remove it, but to suggest yet more complexity.

I really like the initial idea where the market decides on the block size. This is implemented in Bitcoin Classic and I fully support it.

The additional idea of "Acceptable Depth" (AD) was having too many problems, so I set out to find a different solution. This part of Andrew's solution is not a consensus parameter: any client can use its own solution without harming compatibility between clients.

## What is the problem?

In a Bitcoin reality where any full node can determine the block size limits, we expect a slowly growing block size based on how many people are creating fee-paying transactions.

A user running a full node may have left his block size limit a little below the size that miners currently create. If such a larger-than-limit block comes in, it will be rejected. Any blocks extending that chain will also be rejected, which means the node will not recover until the user adjusts the size-limit configuration.

It would be preferable to have a slightly more flexible view of consensus, where we still punish blocks that are outside one of our limits, but not to the extent that we reject reality if the miners disagree with us. Or, in other words, we honour that the miners decide the block size.

In the original "AD" suggestion this was done by simply requiring 5 blocks on a chain when the first is over our limits. After that number of blocks has been seen to extend the too-big one, we have to accept that the rest of the world disagrees with our limits.

This causes a number of issues:

• It ignores the actual size of the violating block: a block is treated the same whether it is 1 byte too large or several megabytes too large. This opens up attacks from malicious miners.
• No history of violating blocks is kept. We look at each violating block in isolation, making it easier to bypass the rules.

## Solving it simpler

Here I propose a much simpler solution that I have been researching for a month; to better understand the effects I wrote a simulation application to explore many different scenarios.

I introduce two rules:

1. Any block violating local limits needs at least one block on top before it can be added to the chain.

2. Calculate a punishment-score based on how much this block violates our limits. The more a block goes over our limit, the higher the punishment.

### How does this work?

In Bitcoin, any block already has a score (GetBlockProof()). This score is based on the amount of work that went into the creation of the block. We add all the block scores together to get a chain-score.

This is how Bitcoin currently decides which chain is the main chain: the one with the highest chain-score.

Should a block come in that is over our block size limit, we calculate a punishment. The punishment is a multiplier against the proof of work of that block. A block that is 10% over the limit gets a punishment score of 1.5. The effect of adding this block to a chain is that it adds the block's POW and then subtracts 1.5 times that again, with the result that the addition of this block removes 50% of the latest block's proof-of-work value from that chain.

The node will thus take as its 'main' chain-tip the block from before the oversized one was added. But in this example, if a miner appends one correct block on top of the oversized one, that chain will again be the main chain.

Should a block come in that is much further over the size limit, the number of blocks miners have to build on top of it before it is accepted is much larger. A block 50% over the limit takes an additional 6 blocks.

The effect of this idea can be explained simply.

First, we make the system more analogue instead of black/white. A node that has to choose between two blocks that are both oversized will choose the smaller one.

Second, we make the system much more responsive: if a block violates our rules a tiny little bit at height N and some other miner adds a block at N+1, we can continue mining at block N+2. On the other hand, an excessive block takes several more blocks on top of it to get accepted by us, so it is still safe.

Last, we remove attack vectors where an attacker creates good blocks to mask the bad ones. This is no longer possible because the negative score that a too-big block creates stays with that chain.

## Visualising it

This proposal is about a node recovering when consensus has not been reached, so I feel OK about introducing some magic numbers and a curve that practice shows gives good results in various use-cases. This can always be changed in a later update, because this is a node-local decision.

On the horizontal axis we have the block size. If the size is below our limit, there is no punishment. As soon as it gets over the block limit, there is a jump and we institute a punishment of 50%. This means that it only adds 50% of the proof-of-work score of the new block.

If the block size goes further over the limit, the punishment grows very fast, taking 6 blocks to accept a block that is 150% of our configured block size limit.

The punishment is based on a percentage of the block size limit itself, which ensures this scales up nicely when Bitcoin grows its acceptable block size. For example, a 2.2MB block against a 2MB limit is a 10% overdraft. We add a factor and an offset, making the formula a simple factor * overdraft + 0.5.

The default factor shown in the graph is 10, but this is something the user can override should the network need it.
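Putting the two rules together, here is a minimal sketch of the scoring in Python. The function names are mine, real proof-of-work values are huge integers rather than the toy numbers used here, and in the node this logic would sit alongside GetBlockProof(); this is an illustration of the idea, not the implementation.

```python
def punishment(block_size, limit, factor=10):
    """Punishment multiplier: zero at or under the limit, then
    factor * overdraft + 0.5 above it. With the default factor of 10,
    a block 10% over the limit scores 1.5."""
    if block_size <= limit:
        return 0.0
    overdraft = (block_size - limit) / limit
    return factor * overdraft + 0.5

def chain_score(blocks, limit):
    """Chain score: each block adds its proof-of-work, and an oversized
    block additionally subtracts punishment * work again."""
    score = 0.0
    for size, work in blocks:  # (block size, proof-of-work) pairs
        score += work - punishment(size, limit) * work
    return score
```

With a 2MB limit, a 2.2MB block has punishment 1.5, so adding it lowers the chain score by half a block's work; one well-formed block on top makes that chain the best one again, exactly as in the example above.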

# Conclusion

Using a simple-to-calculate punishment, and the knowledge that Bitcoin already calculates a score for the chain based on the proof-of-work of each block, we can create a small and simple way to find out how much the rest of the world disagrees with our local world-view, in analogue, real-world measurements.

Nodes can treat the size limits as a soft limit which can be overruled by mining hash power adding more proof of work to a chain. The larger the difference in block size and limit, the more proof of work it takes.

Miners will try to mine correct chains, inside the limits, fixing the problem over time, but they won't waste too much hash power pushing out one slightly oversized block in a chain.

This is not meant as a permanent solution for nodes, and full node operators are not excused from updating their limits. On the other hand, a node that is not maintained regularly will still follow Bitcoin; it will merely trail behind the main chain by a block.

Currently we only have block size as a relevant variable. If we take more variables out of the consensus rules and make them market-set, we can decide on, or make configurable, a proportional system. So, for instance, if we add sigop counts, those violations might only count for 30% of the punishment, whereas block-size overdraft accounts for 70%. The actual punishment is still based on the actual overdraft, in an analogue, (super)linear fashion.

Classic is Back

Bitcoin Classic was started as a response to the market need for bigger blocks, and for an alternative developer group to provide them, as the primary and first way to get Bitcoin to scale to the next million users. Since that first release in February we have come a long way. The Bitcoin landscape has changed, mostly for the better. This is an accomplishment a lot of us can be proud of.

Our two main goals were to have more distributed development and to have bigger blocks. We are a lot closer to both ideals, but there is more work to be done.

I personally started this journey some 3 years ago. I tried the code that was Bitcoin Core and wanted to improve upon some things that annoyed me. Now, 3 years later, I have seen every file and most of the code of what is in Bitcoin. It is a very rewarding problem to study and can occupy one for years.

My primary role in Bitcoin Classic is one of release manager. This essentially means I take the responsibility to get the highest quality releases. Next to that I love to innovate and write code. I've been doing a lot of that over the last months.

So, why title this blog "Classic is back"? The truth is, it never really went away. I think it's fair to say that Classic stepped aside when the 2MB band-aid solution we shipped caused a lot of market response and discussion. But Classic has always been there: busy, coding, improving things.

One of the changes we have seen in the market is a change in understanding of how all Bitcoin software can work together. This idea has been used many times before, so it has already stood the test of time. It is about how we can have a limit without setting a limit in the software.

All software, when it is immature and only targets a small number of users, has limits built in. These limits are really there because the author learned that his software is not good enough to do more, and a limit is by far the easiest 'solution'.

This design isn't uncommon; the older ones among us may remember that MS-DOS had a memory limit of 640kB, and emails 20 years ago had a maximum size of 1MB.

As that software grew up, better solutions were found that made those limits obsolete. The limits were removed and people were happy.

Removing the limits from software doesn't make it unlimited; it just means other costs limit the actual size. You still can't send a DVD-sized email. The maximum size is now determined by the market: a basic calculation of cost and benefit.

Let's take a look at some market incentives that a Bitcoin miner has:

• Bigger blocks are nice for a miner because he gets to pick up more fees.
• On the other hand, smaller blocks are nice to keep a bit of scarcity and have people still paying fees.
• Also, bigger blocks take more time to send and increase the chance of orphans, both of which are costly.
• And on the other hand, smaller blocks cause a bigger backlog and, as a natural result, bad service to paying customers, whom we want to avoid losing to the competition.

These are conflicting incentives. There are both reasons for bigger and there are reasons for smaller blocks. The end result will be, that blocks will have a size set based on the market demand for space in those blocks. Simply because that is the most profitable for miners to do. The good thing here is that the best situation for miners is a pretty good situation for everyone else in Bitcoin as well.
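The pull between these incentives can be made concrete with a toy profit model. Everything in this sketch is hypothetical (the fee rate, the orphan-risk curve and all constants are made up for illustration); it only shows how linear fee revenue and a growing orphan risk produce an interior optimum without any protocol-imposed limit.

```python
# Toy model of a miner's block-size trade-off.
# All constants are hypothetical illustrations, not measured
# network parameters.

BLOCK_REWARD = 12.5        # BTC subsidy per block
FEE_PER_MB = 0.5           # BTC of fees collected per MB of transactions
ORPHAN_RISK_PER_MB = 0.02  # extra orphan probability per MB of block size

def expected_profit(size_mb):
    """Fees grow linearly with block size, but every extra megabyte
    also increases the chance the block is orphaned and earns nothing."""
    revenue = BLOCK_REWARD + FEE_PER_MB * size_mb
    orphan_probability = min(1.0, ORPHAN_RISK_PER_MB * size_mb)
    return revenue * (1.0 - orphan_probability)

def best_size(max_mb=32.0, step=0.1):
    """Scan candidate block sizes and return the most profitable one."""
    sizes = [i * step for i in range(int(max_mb / step) + 1)]
    return max(sizes, key=expected_profit)
```

With these made-up numbers the optimum lands around 12.5 MB: below it the miner leaves fees on the table, above it the orphan risk eats the extra revenue. The point is not the number but that a profit-maximizing size exists on its own.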

What Andrew Stone came up with is that everyone publishes the maximum block size they are willing to accept, which turns block size into an open market. Miners can then decide what the ideal block size is without fearing that their blocks will be rejected by other miners for being too large.

A purely market-driven block size is a big change for Bitcoin, and it may take a bit more time before everyone gets used to it. In the meantime, Classic is working on several next-generation projects.

## Network manager / Admin server

In Classic I started a project called the Network Manager, a replacement network layer which solves many of the problems we have with the p2p code inherited from the Satoshi client. On top of this, an admin server is currently being built. The admin server will fill the role of remote control: you can create transactions, query the block chain, sign data and ask it to make coffee. All but the last are already present; the admin server reuses that existing functionality from the 'Bitcoin-cli' app.

Where it shines is speed: it is several orders of magnitude faster, and it doesn't just answer questions like a webserver does. The admin server keeps a live connection and pushes out data, for instance when a new block comes in.

This enables a whole new set of network tools. Some examples of where it is going to be used:

• Application management software which monitors any number of nodes under its control for warnings and slowness, but also for operational statistics, like how many megabytes have been sent.
• Scientific research that queries the entire blockchain without having to store it locally.
• Much faster uploads of big blocks from a miner to the full node.

This project is nearing 80% completion. Is Classic back on track?

## Flexible Transactions (BIP134)

In the original Bitcoin 0.1 release Satoshi Nakamoto included a design that is a very good indication of his intentions for the future. He included a version field for transactions. Today, 8 years after the fact, we are still using the exact same format that was introduced as version 1 all those years ago. We are not really using this version field that Satoshi provided.

There are plenty of problems we have in Bitcoins design which would benefit immensely from actually being able to make fixes in the transaction format. And flexible transactions is a protocol upgrade that is meant to make that happen.

The Flexible Transaction proposal has been under development for a couple of months now, and this first version already solves quite a lot of issues.

### Bitcoin issues it fixes

• Malleability
• Linear scaling of signature checking
• Hardware wallet support (proofs)
• Very flexible future extensibility
• Double spend proofs
• Makes transactions smaller
• Supports the Lightning Network
• Support future Scripting version increase

### Where are we now?

Flexible Transactions is running on a testnet. The functionality is 80% done. We have some more tests to run.

Bitcoin Classic is definitely back.

## Developer friendly

Bitcoin Classic is based on the 8-year-old Satoshi codebase. I myself have been spending more time than expected learning this codebase, and I know from other developers that I'm definitely not alone. The codebase is not even that large! The most often heard complaint is that it lacks a modular design.

Hard to understand code leads to lower quality products. This is very simple to understand: programmers are just human and they make mistakes when they have to create clean code in a dirty environment. I think it makes a lot of sense to have a focus on improving the architecture and the code quality of Bitcoin Classic. The goal is to help developers understand the code faster and at the same time help our products to be of higher quality.

Because there is such a lack of modular design, the best way to start fixing this is to introduce one. Basics like an "Application" class as a starting point for application-wide resources have been coded, and more is coming.

The various products I mentioned today are also all based on common and reusable technologies. The introduction of a common message-format will be the common layer between the Flexible Transactions and the Network Manager and the Admin Server, as a quick example.

I feel it's very important to work on all these improvements together with other teams that are part of Bitcoin. They have their own improvements, and in the end we want to reuse each other's innovations and, ultimately, code. I have had many good conversations with people like Pedro Pinheiro, Dagur Valberg Johansson (dagurval), Amaury Sechet (deadalNix), Andrea Suisani (sickpig), Freetrader and Justus Ranvier, who are great people to work with and together cover the majority of the Bitcoin implementations. There is a lot of overlap in what we do and how we think.

The important part is the differences. Each Bitcoin client is unique in its own way, and this is useful for everyone. It helps avoid the echo-chamber problem and stops us from killing ideas that are ahead of their time. More people competing on a friendly basis will benefit everyone, and that is what it means to have many implementations of Bitcoin.

Bitcoin Classic is back. Bitcoin Classic is working on some pretty exciting things and we have all come a long way.

Posted
Flexible Transactions

Update: This post was written when FlexTrans was started. It is now mostly finished and has its own home on bitcoinclassic.com, where more info can be found.

I've been asked one question quite regularly and recently with more force. The question is about Segregated Witness and specifically what a hard fork based version would look like.

Segregated Witness (or SegWit for short) is complex. It tries to solve quite a lot of completely different and unrelated issues, and it tries to do this in a backwards-compatible manner. Not a small feat!

So, what exactly does SegWit try to solve? We can find this in the benefits document.

• Malleability Fixes
• Linear scaling of sighash operations
• Signing of input values
• Increased security for multisig via pay-to-script-hash (P2SH)
• Script versioning
• Reducing UTXO growth
• Compact fraud proofs

As mentioned above, SegWit tries to solve these problems in a backwards-compatible way. This requirement is there only because the authors of SegWit set it for themselves: they wished to use a softfork to roll out this protocol upgrade. This post is going to attempt to answer the question whether that is indeed the best way of solving these problems.

Starting with malleability: the problem is that, between being created by the owner of the funds and being mined in a block, a transaction can be changed in such a way that it is still valid but its transaction identifier (TX-id) has changed. But before we dive into the deep end, let's have a general look at the transaction data first.

If we look at a Transaction as it is today, we notice some issues.

| Field | Size / encoding |
|---|---|
| Version | 4 bytes |
| Number of inputs | VarInt (between 1 and 9 bytes) |
| inputs: Prev transaction hash | 32 bytes, stored in reverse |
| inputs: Prev transaction index | 4 bytes |
| inputs: TX-in script length | Compact-int |
| inputs: TX-in script (this is the witness data) | bytearray |
| inputs: Sequence-no/CSV | 4 bytes |
| Number of outputs | VarInt (between 1 and 9 bytes) |
| outputs: Value | var int |
| outputs: TX-out script length | Compact-int |
| outputs: TX-out script | bytearray |
| NLockTime | 4 bytes |
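For concreteness, the "VarInt (between 1 and 9 bytes)" used for the input and output counts is Bitcoin's CompactSize encoding: one byte for small values, and a marker byte followed by a fixed-width little-endian integer for larger ones. A minimal sketch:

```python
import struct

def encode_compact_size(n):
    """Bitcoin's CompactSize: 1 byte for values below 0xfd, otherwise a
    marker byte plus a 2-, 4- or 8-byte little-endian integer."""
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b'\xfd' + struct.pack('<H', n)
    if n <= 0xffffffff:
        return b'\xfe' + struct.pack('<I', n)
    return b'\xff' + struct.pack('<Q', n)

def decode_compact_size(data):
    """Return (value, number of bytes consumed)."""
    first = data[0]
    if first < 0xfd:
        return first, 1
    if first == 0xfd:
        return struct.unpack('<H', data[1:3])[0], 3
    if first == 0xfe:
        return struct.unpack('<I', data[1:5])[0], 5
    return struct.unpack('<Q', data[1:9])[0], 9
```

Note how this is already a third way of writing a number, next to the fixed 4-byte fields and the Compact-int script lengths in the table above.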

The original transaction format as designed by Satoshi Nakamoto opens with a version field. This design approach is common in the industry and the way that this is used is that a new version is defined whenever any field in the data structure needs changing. In Bitcoin we have not yet done this and we are still at version 1.

What Bitcoin has done instead is make small, semi backwards-compatible changes. For instance, the CHECKSEQUENCEVERIFY feature repurposes the sequence field as a way to add data without breaking old clients. Incidentally, this specific change (described in BIP68) is not backwards compatible in the main clients: it depends on a transaction version number greater than 1, yet they all check for standard transactions and say that only version 1 is standard.

The design of having a version number implies that the designer wanted to use hard forks for changes. Any new datastructure requires software that parses it to be updated. So it requires a new client to know how to parse a newly designed data structure. The idea is to change the version number and so older clients would know they should not try to parse this new transaction version. To keep operating, everyone would have to upgrade to a client that supports this new transaction version. This design was known and accepted when Satoshi introduced Bitcoin, and we can conclude his goal was to use hard forks to upgrade Bitcoin. Otherwise there would not be any version numbers.

Let's look at why we would want to change the version. Several items in the table above are confusing; most notably, numbers are stored in three different, incompatible formats in transactions. Not really great, and certainly a source of bugs.

Transactions are cryptographically signed by the owner of the coin so others can validate that he is actually allowed to move those coins. The signature is stored in the TX-in script.
Crypto-geeks may have noticed something weird that goes against textbook knowledge. Textbook knowledge dictates that a digital signature has to be placed outside of the thing it signs. This is because a digital signature protects against changes: inserting the signature somewhere in the middle after generating it would modify the transaction, which would mean having to regenerate the signature because the transaction changed.

Bitcoin's creator did something smart with how transactions are actually signed, so the signature doesn't have to be outside the transaction. It works, mostly. But we want it to work flawlessly, because this being too smart is currently the cause of the dreaded malleability issues, where people have been known to lose money.

SegWit actually solves only one of these items: it moves the signature out of the transaction. SegWit doesn't fix any of the other problems in Bitcoin transactions, and it also doesn't bump the version, even though it changes the transaction's meaning in a way that old clients essentially cannot understand.

Old clients will stop being able to check the SegWit type of transaction, because the authors of SegWit made it so that SegWit transactions just put an "All OK" sticker on the car while moving the real data to the trailer, knowing that old clients will ignore the trailer.

SegWit wants to keep the data structure of the transaction unchanged while at the same time trying to fix that very data structure. This causes friction, as you can't do both at once, so a non-ideal outcome and hacks are to be expected.

The problem, then, is that SegWit introduces more technical debt, a term software developers use to say the system design isn't done and needs significantly more work. And the term 'debt' is accurate, as over time everyone that uses transactions will have to understand the defects to work with them properly. Which is quite similar to paying interest.

Using a soft fork means old clients will stop being able to validate transactions, or even parse them fully, while these old clients remain convinced they are doing full validation.

## Can we improve on that?

I want to suggest a one-time change to the data structure of the transaction so it becomes much more future-proof and also fixes the issues it has gained over time, including the malleability issue. It turns out that this new data structure makes all the other issues that SegWit fixes quite trivial to fix.

I'm going to propose an upgrade I called;

Flexible Transactions

Last weekend I wrote a little app (sources here) that reads a transaction and then writes it out in a new format I've designed for Bitcoin. It's based on ideas I've used for some time in other projects, but this is the first open source version.

The basic idea is to make the transaction much more like modern formats such as JSON, HTML and XML. It's a 'tag'-based format, which has various advantages over a closed binary-blob format.
For instance, if you add a new field, much like tags in HTML, an old client will just ignore it, making the format backwards compatible and friendly to future upgrades.

• Solving the malleability problem becomes trivial.
• We solve the quadratic hashing issue.
• Tag-based systems allow you to skip writing unused or default values.
• Since we are changing things anyway, we can default to using only var-int encoded data instead of having three different number formats in transactions.
• Adding a new tag later (for instance ScriptVersion) is easy and doesn't require further changes to the transaction data structure. Old clients can still make sense of all the known data.
• The actual transaction turns out to be about 3% shorter on average (calculated over 200K transactions).
• Where SegWit adds a huge amount of technical debt, my Flexible Transactions proposal instead pays down a good chunk of technical debt.
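To illustrate the var-int idea, here is a base-128 ("LEB128-style") variable-length integer: small numbers take one byte, larger numbers take more. This is a sketch of the concept only; the exact encoding Flexible Transactions settled on may differ in detail.

```python
def encode_varint(n):
    """Base-128 varint sketch: 7 payload bits per byte, high bit set on
    all bytes except the last. One byte suffices for values up to 127."""
    out = bytearray()
    while True:
        byte = n & 0x7f
        n >>= 7
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def decode_varint(data):
    """Return (value, number of bytes consumed)."""
    result = shift = pos = 0
    while True:
        byte = data[pos]
        result |= (byte & 0x7f) << shift
        pos += 1
        if not byte & 0x80:
            return result, pos
        shift += 7
```

One scheme for every number in the transaction, instead of the three incompatible formats used today.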

An average Flexible Transaction will look like this;

| Field | Encoding | Covered by |
|---|---|---|
| TxStart (Version) | 0x04 | TX-ID data |
| inputs: TX-ID I try to spend | 1 + 32 bytes | TX-ID data |
| inputs: Index in prev TX-ID | varint | TX-ID data |
| outputs: TX-out Value (in Satoshis) | VarInt | TX-ID data |
| outputs: TX-out script | bytearray | TX-ID data |
| inputs: TX-in-script (Witness data) | bytearray | WID-data |
| TxEnd | 0x2C | |

Notice how unused tags are skipped. NLockTime and Sequence were not used in this example, so they take no space at all.

The Flexible Transaction proposal uses a list of tags, like JSON's "Name": "Value" pairs. This makes the content very flexible and extensible; Flexible Transactions just use a binary format instead of text.
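To make the tag idea concrete, here is a hypothetical token encoder: each token packs a tag number and a value type into one header byte, followed by the value. This illustrates the tagged-format concept only; it is not the actual BIP134 wire format, and the tag numbers are made up.

```python
# Hypothetical tag-based token stream, for illustration only.
TYPE_VARINT, TYPE_BYTEARRAY = 0, 1

def encode_varint(n):
    """Base-128 varint (see the sketch above)."""
    out = bytearray()
    while True:
        byte = n & 0x7f
        n >>= 7
        out.append(byte | 0x80 if n else byte)
        if not n:
            return bytes(out)

def write_token(tag, value):
    """Header byte holds the tag number in the high bits and the value
    type in the low bit; the value follows, length-prefixed if needed."""
    if isinstance(value, int):
        return bytes([tag << 1 | TYPE_VARINT]) + encode_varint(value)
    return bytes([tag << 1 | TYPE_BYTEARRAY]) + encode_varint(len(value)) + value

# Unused fields are simply never written: a transaction without
# NLockTime emits no NLockTime token and pays zero bytes for it.
TAG_PREV_INDEX, TAG_LOCKTIME = 2, 5   # hypothetical tag numbers
tx = write_token(TAG_PREV_INDEX, 0)   # locktime token omitted entirely
```

An old parser that meets an unknown tag number can read the type bit, skip the value, and carry on: exactly the HTML-like forward compatibility described above.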

The biggest change here is that the TX-in script (which SegWit calls the witness data) is moved to the end of the transaction. When a wallet generates this new type of transaction it appends the witness data at the end, but the transaction ID is calculated by hashing only the data that comes before the witness section.

The witness data typically contains a public key as well as a signature. In the Flexible Transactions proposal, the signature is made by signing exactly the same set of data that is hashed to generate the TX-ID, thereby solving the malleability issue: if someone changed the transaction, it would invalidate the signature.
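A minimal sketch of that hashing rule, using double SHA-256 as Bitcoin does. The serialized sections here are placeholder byte strings, not the real field layout:

```python
import hashlib

def double_sha256(data):
    """Bitcoin's standard hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def txid(tx_id_section, witness_section):
    """The TX-ID covers only the pre-witness bytes, so changing the
    signature (witness) can no longer change the transaction's id."""
    return double_sha256(tx_id_section)

core = b'...serialized inputs and outputs...'   # placeholder bytes
sig_a, sig_b = b'signature-variant-A', b'signature-variant-B'
# Malleating the witness leaves the TX-ID untouched:
assert txid(core, sig_a) == txid(core, sig_b)
```

Since the signature signs that same pre-witness region, any change to the spendable content breaks the signature, while witness tweaks can no longer break the TX-ID.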

I took 187000 recent transactions and checked what this change would do to their size with the test app I linked to above.

• Transactions went from an average size of 1712 bytes to 1660 bytes, and a median size of 333 bytes to 318 bytes.
• Transactions can be pruned (removing the signatures) after they have been confirmed. The size then goes down to an average of 450 bytes and a median of 101 bytes.
• Contrary to SegWit, new transactions get smaller for all clients with this upgrade.
• Transactions, and blocks, where signatures are removed can expect up to a 75% reduction in size.

## Broken OP_CHECKSIG scripting instruction

To actually fix the malleability issues at their source we need to fix this instruction. But we can't change the original until we decide to make a version 2 of the Script language.
This change is not really a good trigger for a version two, and it would be madness to do that at the same time as we roll out a new transaction format (too many changes at the same time is bound to cause issues).

This means that in order to make the Flexible Transaction proposal work, we need to take one of the NOP codes currently unused in Script and make it do essentially the same as OP_CHECKSIG, but instead of the overly complicated manner of deciding what it will sign, we simply define it to sign exactly the same area of the transaction that we use to create the TX-ID (see the rightmost column in the table above).

This new opcode should be relatively easy to code, and it makes it really easy to clean up the scripting issues in a future version of Script.

## So, how does this compare to SegWit?

First of all, introducing a new version of a transaction doesn't mean we stop supporting the current version. So all this is perfectly backwards compatible because clients can just continue making old style transactions. This means that nobody will end up stuck.

Using a tagged format for a transaction is a one-time hard fork to upgrade the protocol, and it allows many more changes to be made with much lower impact on the system in the future. There are parallels to SegWit; it strives for the same goals, after all. But where SegWit tries to adjust a static memory format by re-purposing existing fields, Flexible Transactions presents a coherent, simple design that removes lots of conflicting concepts.

Most importantly, years after Flexible Transactions has been introduced, we can continue to benefit from the tagged system to extend the format and fix issues we haven't even thought of today, using the same consistent concepts.

We can fit more transactions in the same (block) space, similarly to SegWit, and in both solutions the signatures can be pruned by full nodes without any security implications. What SegWit doesn't do is let unused features take no space: if a transaction doesn't use NLockTime (which is true of nearly 100% of them), it still takes space in SegWit but not in this proposal. Expect your transactions to be smaller, and thus to pay a lower fee!

On size: SegWit proposes to gain 60% space, which comes from removing the signatures minus the overhead introduced. In my tests, Flexible Transactions showed a 75% gain.

SegWit also describes changes how data is stored in the block. It creates an extra merkle tree it stores in the coinbase transaction. This requires all mining software to be upgraded. The Flexible Transactions proposal is in essence solving the same problem as SegWit and uses a cleaner solution where the extra merkle-tree items are just added to the existing merkle-tree.

At the start of this blog I mentioned the list of advantages that the authors of SegWit included. It turns out that those advantages are completely unrelated to each other, and each has a very separate solution to its individual problem. The tricky part is that, due to the requirement of old clients staying forwards compatible, SegWit is forced to push them all into the one 'fix'.

Let's go over them individually:

### Malleability Fixes

Using this new version of a transaction data-structure solves all forms of known malleability.

### Linear scaling of sighash operations

If you fix malleability, you always end up fixing this issue too. So there is no difference between the solutions.

### Signing of input values

This is included in this proposal.

### Increased security for multisig via pay-to-script-hash (P2SH)

The Flexible transactions proposal outlined in this document makes many of the additional changes in SegWit really easy to add at a later time. This change is one of them.

Bottom line: changing the security with a bigger hash is only included in SegWit because SegWit doesn't solve the transaction versioning problem, and solving that is exactly what makes it trivial to do separately.
With Flexible Transactions, this change can be made at any time in the future with minimal impact.

### Script versioning

Notice that this only introduces a version so we could increase it in future. It doesn't actually introduce a new version of Script.

SegWit adds a new byte for versioning. Flexible Transactions has an optional tag for it. Both support it, but there is clearly a difference here.

This is an excellent example of where tagged formats shine brighter than the static memory format SegWit uses: adding such a versioning tag is much cleaner, much easier and less intrusive with Flexible Transactions.

In contrast, SegWit has a byte reserved that you are not allowed to change yet. Imagine having to include "body background=white" in each and every HTML page because you were not allowed to leave it out. That is what SegWit does.

### Reducing UTXO growth

I suggest you read this point for yourself; it is rather interestingly technical and I'm sure many will not fully grasp the idea. The bottom line is that they claim the UTXO database will avoid growing because SegWit doesn't allow more customers to be served.

I don't even know how to respond to such a solution. It's throwing out the baby with the bath water.

Database technology has matured immensely over the last 20 years; the UTXO database is tiny in comparison to what free and open source databases can handle today. Granted, the UTXO set is slightly unfit for a normal SQL database, but telling customers to go elsewhere has never worked out for products in the long term.

### Compact fraud proofs

Again, this is not really included in SegWit; it is just a basis for future work. The exact same basis is provided by Flexible Transactions, so on this point they are identical.

### What do we offer that SegWit doesn't offer?

• A transaction becomes extensible. Future modifications are cheap.
• A transaction gets smaller. Using fewer features also takes less space.
• We use an encoding for integers (var-int) that takes fewer bytes for smaller numbers, moving away from a weird format not used anywhere else.
• We remove technical debt and simplify implementations. SegWit does the opposite.

## Conclusions

SegWit has some good ideas and some needed fixes. Taking all the good ideas and improving on them can be done, but requires a hard fork. This post shows that doing so brings advantages that are quite significant and certainly worth it.

We introduced a tagged data structure, conceptually like JSON and XML in that it is flexible, but as a compact and fast binary format. The Flexible Transaction data format allows many future innovations to be done cleanly, consistently and, at a later stage, in a more backwards-compatible manner than SegWit is able to, even given much more time. We showed that the fundamental issues SegWit tries to fix all in one go can be separated, and each then becomes much lower risk to Bitcoin as a system.

After SegWit has been in the design stage for a year and we still find show-stopping issues delaying the release, I would argue that dropping the requirement of staying backwards compatible should be on the table.

The introduction of the Flexible Transaction upgrade has big benefits because the transaction design becomes extensible: a hard fork is done once to allow soft upgrades in the future.

The Flexible transaction solution lowers the amount of changes required in the entire ecosystem. Not just for full nodes. Especially considering that many tools and wallets use shared libraries between them to actually create and parse transactions.

The Flexible Transaction upgrade proposal should be considered by anyone that cares about protocol stability, because its risk of failures during or after upgrading is several orders of magnitude lower than SegWit's, and it removes technical debt, which will allow us to innovate better into the future.

Flexible Transactions are smaller, giving significant savings after pruning compared to SegWit.

Posted
Scaling Bitcoin

One of the most talked about topics in 2016 when it comes to Bitcoin is the lack of a good plan for growing and scaling the system into the future.

I find this curious, as anyone that starts to depend on any system should do at least some investigation on what kind of growth it can handle and whether it can actually do what we want it to, this year and 5 years or longer into the future.

In this post I want to investigate what kind of scaling we can expect Bitcoin to have and what we need to do to get more scaling out of the system if these expectations prove not to be enough.

## Goals

The number one reason that Bitcoin has value right now is its promise that Bitcoin will be used and seen as useful by millions of people in the not-so-far future. Any money only has value when enough people assign it value. Any money only has value when it can actually be used. If nobody accepts it for payments, the value will not be realized.

So the number one goal is to allow millions of people to be able to use Bitcoin in their day-to-day lives, where 'use' is defined as making at least one transaction a day. As with any technology, we don't expect this load to be available tomorrow or even this year, as growth happens over time and systems get built over time.

So let's have a goal of 50 million users sending one transaction a day using the Bitcoin network. Not today but 5 years into the future.

A further goal for home users is that the rate at which they can process Bitcoin blocks should be at least five times the speed at which they are created. This means that if a system has no internet for an hour, it would take around 20 minutes at full speed to catch up (process the backlog plus all new transactions generated while processing the backlog).
Faster is better, but slower than 5 times the creation speed is too slow.
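The catch-up figure follows from a simple balance: while recovering, the node clears blocks at $k$ times the creation rate while new blocks keep arriving at the normal rate. For an outage of length $T$, the catch-up time $t$ satisfies:

$$t \cdot k = T + t \quad\Rightarrow\quad t = \frac{T}{k-1} = \frac{60\ \text{min}}{5-1} = 15\ \text{min}$$

So 15 minutes is the ideal lower bound for a one-hour outage at 5 times speed; the roughly 20 minutes quoted above leaves some margin for real-world overhead.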

## Baseline

Or, what is the current theoretical level of support?

Bitcoin, in the form of various node-software implementations, has had several years to mature. The leadership was not really focused on large growth of this basic layer, as we can see from the uptick in progress that started after Bitcoin Classic and Bitcoin Unlimited began competing with them. As usual, competition is good for the end user, and we see some promise of future gains appearing.

But let's take a look at what kind of transaction load we could support today.

Last week forum.bitcoin.com published a video about the time it takes to download, fully validate and check 7 years, or 420000 blocks, of Bitcoin history (from day one of Bitcoin). This is 75 GB of data, which took 6 hours and 50 minutes to fully validate on mid-range hardware. It wasn't cheap hardware, but it was certainly not top-of-the-line or server hardware. In other words, it is a good baseline.

Taking a closer look at what was done in those 6 hours and 50 minutes:

• From block 295000 to block 420000 (125000 blocks), each and every transaction signature was validated.
• A UTXO database of 11 million transactions with 40 million not-yet-spent Bitcoin addresses was built (see the RPC call gettxoutsetinfo).
• The 125000 blocks contained 104847758 transactions, all of which were validated. That's 105 million transactions.

We understand that this was all done with no tweaks to the settings, on a Classic 1.1.0 node (with performance equivalent to Bitcoin Core 0.12.1).

## What needs work?

Let's take a look and compare our baseline to our goal. Most people would like software to always get faster and better, but priorities matter and we should look at which parts need attention first.

Our goal was that individual nodes need to be able to process about 50 million transactions in 24 hours.

We noticed that in the baseline check of 6h50m, our node actually downloaded, stored and checked almost 105 million transactions. This extrapolates to a daily rate of 368 million transactions downloaded and validated.

$$TX_{day} = \frac{104847758\ \text{tx}}{6 \times 60 + 50\ \text{min}} \times 24 \times 60\ \text{min} \approx 368\ \text{million}$$

This means that our 5-years-in-the-future goal of 50 million transactions per day is already not an issue for bandwidth and for CPU power today. In fact, our baseline system managed to exceed our goal by a factor of 7.

Our baseline system could handle 7 times our 5 years in the future goal volume today!

Today, an individual Bitcoin node can download, store and validate 368 million transactions a day. That's many times the volume that has ever been sent using Bitcoin.

## How do I see the system scale?

### A typical home-node

A node that validates all blocks fully is needed in order to keep everyone honest. It also gives me the peace of mind that if I trust my node, I don't have to trust anyone else. So I foresee that you will get small communities gathering around full nodes. You can let your family use it, or maybe your football club or church will set one up just to support the community. Individuals will then make their phone-wallets have at least one host they trust, which is the one from your community.

This preserves Bitcoin's greatest asset: you don't have to trust banks or governments. Trusting your local church or football club is a much more normal and common thing to do.

Such a node would have no need to keep the full history from block zero. It would turn on pruning. With today's price of hardware this does not mean it would stop being able to serve historic blocks, because it could easily hold a month of blocks history. This does, however, mean we need to make the software a bit smarter. See some pruning ideas.

Hard drive space is cheap today. A 3 TB hard drive stores 75 years of Bitcoin history at the current block size. But what if we start getting to our goal? What kind of hard drive do we need then?

The answer to that is that we don't need anything large at all. The idea that we need to have a larger hard drive because blocks are bigger is a misunderstanding. We should work on some pruning ideas in order to scale the system without everyone having to invest in storage space.

The last of the big components for our home node is internet connectivity. To reach our goal we need to be able to download 50 million transactions, at about 300 bytes each, over a 24-hour period.

$$\frac{50\,000\,000 \times 300}{24 \times 60 \times 60} \approx 173611\ \text{bytes/sec}$$

$$173611 \times 8 \approx 1.39\ \text{Mbit/sec}$$

Ideally we go 5 times as fast to make sure the node can 'catch up' if it was offline for some time, and we may want to add some overhead for other parts as well. But the 1.39 Mbit minimum is available practically everywhere today. 2 Gbit, which is about 1440 times faster than what we need to reach our 5-year plan, is available in countries like Japan today.

Home nodes have absolutely nothing to fear from even a huge growth of Bitcoin to about 50 million daily users. Their hardware and internet connections will cover them without pain, especially in about 5 years when more software optimizations will have been created.

Notice that there is no dependency on principles like Moore's law here, no hardware growth is needed to reach our goal.

## A typical user

A typical user is suggested to use a phone or hardware wallet. The actual requirements for making payments safely, and fast if you use SPV clients, are really low.

Current wallets are in need of some work, though. They are typically quite greedy in using the network to update the state of the wallet. While useful, this is unwanted in many situations: when I'm abroad on an expensive data plan, for instance.

There is no need to do any communication with the network before creating a transaction that pays a merchant. Only the actual payment itself needs to be transferred to the Bitcoin network.

A typical user uses a phone. On the topic of scaling there is little to nothing they must do in order to continue working with on-chain scaling.

Usability and related topics in thin clients do need substantial amounts of work though.

## A typical mining node

Miners need a full node, where 'full' means that it validates all transactions. In addition to what a home node needs, a miner also needs a fast connection to other miners and a fast way to broadcast his blocks to them.

Just like with a typical home node, the bandwidth, hard-drive space, and CPU speed available today are already mostly sufficient for being part of the network.

Additionally, a miner uses their node to create block templates, which means the node takes a section of the pool of unconfirmed transactions and creates a block based on it. This process has seen some optimizations already, but more could be made. For instance, the getblocktemplate RPC call checks the block it just created for validity before it returns it. This check takes quite a lot of time, and a simple solution would be to decouple the returning of the block from the validation so the miner can start mining optimistically while the check runs (it should pass in practically 100% of cases anyway).
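The decoupling idea can be sketched as follows. This is a minimal illustration, not Bitcoin Classic's actual implementation: the function names and the mempool/template shapes are hypothetical, and only the structure (return the template first, validate in the background) reflects the proposal above.

```python
import threading

def create_template(mempool):
    # Stand-in for the real template builder: pick the highest-fee
    # transactions from the unconfirmed pool. (Hypothetical shape.)
    return {"transactions": sorted(mempool, key=lambda tx: tx["fee"], reverse=True)}

def validate_template(template, on_failure):
    # Stand-in for the full validity check, which is expected to pass
    # in practically 100% of cases.
    if not template["transactions"]:
        on_failure("empty template")

def get_block_template(mempool, on_failure):
    # Return the template immediately so the miner can start hashing;
    # run the expensive validity check in a background thread.
    template = create_template(mempool)
    checker = threading.Thread(target=validate_template, args=(template, on_failure))
    checker.start()
    return template, checker
```

The miner only needs to react to `on_failure` in the rare case the background check finds a problem, rather than waiting for the check on every template.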

The bigger the blocks get, the more data is returned, and the system currently uses JSON, which is almost the worst possible container for large binary data blobs. Replacing the RPC interface with something that merely changes the communication format to binary is relatively easy to do (probably a one-month project) and likely needed so miners don't end up waiting too long on interface delays.
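To see why JSON is such a poor fit, consider that JSON cannot carry raw bytes, so binary blobs get hex-encoded, roughly doubling their size before any framing overhead. A quick demonstration (the payload shapes are illustrative, not the actual RPC format):

```python
import json
import os

block = os.urandom(1_000_000)  # pretend 1 MB block of raw bytes

# A JSON RPC response must hex-encode the block to make it text-safe:
json_payload = json.dumps({"result": block.hex()})
# A binary protocol could ship the bytes as-is:
binary_payload = block

print(len(binary_payload))  # 1,000,000 bytes
print(len(json_payload))    # over 2,000,000 bytes: hex doubles it, plus framing
```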

In our baseline node we explained that it took 7 hours to fully sync a brand-new node from zero. This will stop being the case when we scale up to much bigger sizes; the initial sync will start taking a substantial amount of time. Yet a miner requires a fully synced node. Bitcoin Classic already has one big change that pushes the validation time down substantially: it introduced dynamic checkpoints, which allow the node to skip validation of transaction data by assuming that about a week's worth of blocks will not be built on top of double-spend data. This removes the validation of hundreds of millions of transactions for a starting node.

Another suggestion for future Bitcoin clients aimed at miners is that a new node can be pointed at a known and trusted node. The new node would then receive the UTXO set and all other details it needs from this trusted node, which means that after downloading only a couple of gigabytes your new node can be up and running in 10 minutes.

The most important improvements for mining are various ways to ensure fast download and upload of the actual new blocks.

First there is xthin, which is a way to make a 1 MB block cost only about 25 KB to send to all miners. This scaling is linear, so a 10 MB block will likely cost around 250 KB to send.

Next to that is a technique I called "optimistic mining", which helps miners by splitting the uploading of blocks into two parts. The first is a super-fast notification of the new block: just the block header. A miner receiving such a header validates that it has valid proof of work and can then start mining empty blocks on top of it. When the full block has arrived and all its transactions are seen, the mempool is updated to account for the new block, and last, a new block template is created with as many transactions as fit; only then does the miner start mining it.
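The two phases can be sketched as two event handlers. This is a schematic of the flow described above, not an implementation: the dict fields and the `mine` callback are hypothetical stand-ins.

```python
def on_header(header, mine):
    # Phase 1: the header alone proves sufficient work, so immediately
    # start mining an empty block on top of it.
    if header["pow_valid"]:
        mine({"parent": header["hash"], "transactions": []})

def on_full_block(block, mempool, mine):
    # Phase 2: the full block arrived. Remove its transactions from the
    # mempool, build a full template from what remains, and mine that.
    remaining = [tx for tx in mempool if tx not in block["transactions"]]
    mine({"parent": block["hash"], "transactions": remaining})
    return remaining
```

The key property is that no hashing time is wasted between the header arriving and the full block being processed.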

A mining node needs neither a wallet nor a history of blocks, so it can turn on pruning.

Many of these techniques are already in development or planned for the next year or so. To reach our 50 million users per day in 5 years, most of these will be more than enough to keep a miner connected to the Bitcoin network without having to invest in a high-end server for the Bitcoin node.

## Conclusions

The goal I argued from is 50 million users per day. This is a huge increase from today, so to make sure we do it properly, the goal is set 5 years in the future.

Scaling Bitcoin nodes is ultimately boring work requiring very little effort, because it turns out that a modern, simple system can already scale easily to 10,000 times the current maximum allowed size.

Scaling the entire system takes a little more work, but mostly because miners have not received a lot of new features that they would need in order to make scaling safe for them. Most of those features could be added in a matter of months, with technologies like xthin blocks and optimistic mining already well underway.

The conclusion I have to draw is that the goal of 50M/day is not just reachable; the 5-year timeline is one we will likely beat quite easily.

Smart tricks like the Lightning network are not mentioned at all in this document because there is no need for them. Bitcoin can scale on-chain quite easily with almost no risk. Ideas like Lightning are quite high risk because there are so many unknowns.

By far the biggest problem with regards to scaling today is the protocol limit of 1MB block size. This should be removed as soon as possible.

##### Edit history
• Fixed a KB -> Mb calculation
• fixed some grammar and other tweaks (thanks Philip!)
• Changed the 'safe multiplier' from 2 to 5.