
Dragonchain Great Reddit Scaling Bake-Off Public Proposal

Dragonchain Public Proposal TL;DR:

Dragonchain has demonstrated handling twice Reddit's total daily volume (votes, comments, and posts, per the Reddit 2019 Year in Review) in a 24-hour demo on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. At the time, in January 2020, the entire cost of the demo was approximately $25K on a single system (transaction fees locked at $0.0001/txn). With current fees (lowest fee $0.0000025/txn), the same demo would cost as little as $625.
Watch Joe walk through the entire proposal and answer questions on YouTube.
This proposal is also available on the Dragonchain blog.

Hello Reddit and Ethereum community!

I’m Joe Roets, Founder & CEO of Dragonchain. When the team and I first heard about The Great Reddit Scaling Bake-Off we were intrigued. We believe we have the solutions Reddit seeks for its community points system and we have them at scale.
For your consideration, we have submitted our proposal below. The team at Dragonchain and I welcome and look forward to your technical questions, philosophical feedback, and fair criticism, to build a scaling solution for Reddit that will empower its users. Because our architecture is unlike other blockchain platforms out there today, we expect to receive many questions while people try to grasp our project. I will answer all questions here in this thread on Reddit, and I've answered some questions in the stream on YouTube.
We have seen good discussions so far in the competition. We hope that Reddit’s scaling solution will emerge from The Great Reddit Scaling Bake-Off and that Reddit will have great success with the implementation.

Executive summary

Dragonchain is a robust open source hybrid blockchain platform that has stood the test of time since our inception in 2014. We have continued to evolve to harness the scalability of private nodes, yet take full advantage of the security of public decentralized networks, like Ethereum. We have a live, operational, and fully functional Interchain network integrating Bitcoin, Ethereum, Ethereum Classic, and ~700 independent Dragonchain nodes. Every transaction is secured to Ethereum, Bitcoin, and Ethereum Classic. Transactions are immediately usable on chain, and the first decentralization is seen within 20 seconds on Dragon Net. Security increases further on the public networks ETH, BTC, and ETC within 10 minutes to 2 hours. Smart contracts can be written in any executable language, offering full freedom to existing developers. We invite any developer to watch the demo, play with our SDKs, review the open source code, and help us move forward. Dragonchain specializes in scalable loyalty & rewards solutions and has built a decentralized social network on chain, with very affordable transaction costs. This experience can be combined with the insights Reddit and the Ethereum community have gained in the past couple of months to roll out the solution at a rapid pace.

Response and PoC

In The Great Reddit Scaling Bake-Off post, Reddit has asked for a series of demonstrations, requirements, and other considerations. In this section, we will attempt to answer all of these requests.

Live Demo

A live proof of concept showing hundreds of thousands of transactions
On Jan 7, 2020, Dragonchain hosted a 24-hour live demonstration during which a quarter of a billion (250 million+) transactions executed fully on an operational network. Every single transaction on Dragonchain is decentralized immediately through 5 levels of Dragon Net, and then secured with combined proof on Bitcoin, Ethereum, Ethereum Classic, and Binance Chain, via Interchain. This means that every single transaction is secured by, and traceable to these networks. An attack on this system would require a simultaneous attack on all of the Interchained networks.
24 hours in 4 minutes (YouTube)
The demonstration was of a single business system, and any user is able to scale this further by running multiple systems simultaneously. Our goal for the event was to demonstrate a consistent capacity greater than that of Visa over an extended time period.
Tooling to reproduce our demo is available here:
https://github.com/dragonchain/spirit-bomb

Source Code

Source code (for on & off-chain components as well tooling used for the PoC). The source code does not have to be shared publicly, but if Reddit decides to use a particular solution it will need to be shared with Reddit at some point.

Scaling

How it works & scales

Architectural Scaling

Dragonchain’s architecture attacks the scalability issue from multiple angles. Dragonchain is a hybrid blockchain platform, wherein every transaction is protected on a business node to the requirements of that business or purpose. A business node may be held completely private, or it may be exposed and replicated to any degree desired.
Every node has its own blockchain and is independently scalable. Dragonchain established Context Based Verification as its consensus model. Every transaction is immediately usable on a trust basis, and in time is provable to an increasing level of decentralized consensus. A transaction will have a level of decentralization to independently owned and deployed Dragonchain nodes (~700 nodes) within seconds, and full decentralization to BTC and ETH within minutes or hours. Level 5 nodes (Interchain nodes) function to secure all transactions to public or otherwise external chains such as Bitcoin and Ethereum. These nodes scale the system by aggregating multiple blocks into a single Interchain transaction on a cadence. This timing is configurable based upon average fees for each respective chain. For detailed information about Dragonchain’s architecture, and Context Based Verification, please refer to the Dragonchain Architecture Document.
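As a rough sketch of the Level 5 aggregation described above (the names, data shapes, and Merkle construction here are our illustration, not Dragonchain's actual implementation), an Interchain node can buffer block hashes and publish a single combined digest to a public chain on a configurable cadence:

```python
import hashlib
import time

def merkle_root(hashes):
    """Combine a list of block hashes into one digest by pairwise hashing."""
    layer = list(hashes)
    if not layer:
        raise ValueError("no blocks to anchor")
    while len(layer) > 1:
        if len(layer) % 2:                       # duplicate the last on odd layers
            layer.append(layer[-1])
        layer = [hashlib.sha256(layer[i] + layer[i + 1]).digest()
                 for i in range(0, len(layer), 2)]
    return layer[0]

class InterchainAnchor:
    """Hypothetical L5 node: buffer block hashes, anchor one digest per cadence."""
    def __init__(self, cadence_seconds):
        self.cadence = cadence_seconds           # tunable per chain, e.g. by average fee
        self.buffer, self.last_anchor = [], time.time()

    def add_block(self, block_hash):
        self.buffer.append(block_hash)
        if time.time() - self.last_anchor >= self.cadence:
            digest = merkle_root(self.buffer)    # one public-chain txn secures them all
            print(f"anchoring {len(self.buffer)} blocks as {digest.hex()[:16]}...")
            self.buffer, self.last_anchor = [], time.time()

anchor = InterchainAnchor(cadence_seconds=0)     # zero cadence: anchor immediately
anchor.add_block(hashlib.sha256(b"block 1").digest())
```

Because many blocks collapse into one anchoring transaction, the per-transaction cost of public-chain security shrinks as volume grows.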

Economic Scaling

An interesting feature of Dragonchain’s network consensus is its economics and scarcity model. Since Dragon Net nodes (L2-L4) are independent staking nodes, deployment to cloud platforms would allow any of these nodes to scale to take on a large percentage of the verification work. This is great for scalability, but not good for the economy, because there is no scarcity, and pricing would develop a downward spiral and result in fewer verification nodes. For this reason, Dragonchain uses TIME as scarcity.
TIME is calculated as the number of Dragons held, multiplied by the number of days held. TIME influences the user’s access to features within the Dragonchain ecosystem. It takes into account both the Dragon balance and length of time each Dragon is held. TIME is staked by users against every verification node and dictates how much of the transaction fees are awarded to each participating node for every block.
TIME also dictates the transaction fee itself for the business node. TIME is staked against a business node to set a deterministic transaction fee level (see transaction fee table below in Cost section). This is very interesting in a discussion about scaling because it guarantees independence for business implementation. No matter how much traffic appears on the entire network, a business is guaranteed to not see an increased transaction fee rate.
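A minimal sketch of the TIME mechanics as described above; the fee tiers below are illustrative placeholders, not Dragonchain's published fee table:

```python
def time_score(dragons_held: float, days_held: int) -> float:
    """TIME = number of Dragons held multiplied by the number of days held."""
    return dragons_held * days_held

def transaction_fee(staked_time: float) -> float:
    """Deterministic per-transaction fee from TIME staked on a business node.
    Tier boundaries are made-up placeholders for illustration only."""
    tiers = [                 # (minimum TIME staked, fee per transaction, USD)
        (10_000_000, 0.0000025),
        (1_000_000, 0.00001),
        (0, 0.0001),
    ]
    for minimum, fee in tiers:
        if staked_time >= minimum:
            return fee

# A business staking 5M Dragons held for 30 days has 150M TIME, which locks
# in the lowest illustrative fee no matter how busy the rest of the network is.
print(transaction_fee(time_score(5_000_000, 30)))   # -> 2.5e-06
```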

Scaled Deployment

Dragonchain uses Docker and Kubernetes to allow the use of best-practice traditional system scaling. Dragonchain offers managed nodes with an easy-to-use web-based console interface. The user may also deploy a Dragonchain node within their own datacenter or favorite cloud platform. Users have deployed Dragonchain nodes on-prem and on Amazon AWS, Google Cloud, MS Azure, and other hosting platforms around the world. Any executable code, anything you can write, can be written into a smart contract. This flexibility is what allows us to say that developers with no blockchain experience can use any code language to access the benefits of blockchain. Customers have used NodeJS, Python, Java, and even BASH shell script to write smart contracts on Dragonchain.
With Docker containers, we achieve better separation of concerns, faster deployment, higher reliability, and lower response times.
We chose Kubernetes for its self-healing features, ability to run multiple services on one server, and its large and thriving development community. It is resilient, scalable, and automated. OpenFaaS allows us to package smart contracts as Docker images for easy deployment.
Contract deployment time is now bounded only by the size of the Docker image being deployed but remains fast even for reasonably large images. We also take advantage of Docker’s flexibility and its ability to support any language that can run on x86 architecture. Any image, public or private, can be run as a smart contract using Dragonchain.
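To make the "any executable code" point concrete, here is a minimal sketch of what a containerized smart contract can look like: a plain Python script that reads a transaction payload and emits a result. The payload shape and the stdin/stdout convention are our assumptions for illustration, not Dragonchain's actual contract interface:

```python
#!/usr/bin/env python3
"""Hypothetical smart contract: tally a vote carried in a transaction payload.
Packaged as a Docker image, any language with this shape would work the same way."""
import json
import sys

def handle(transaction: dict) -> dict:
    # Assumed payload shape: {"post_id": "...", "direction": "up" or "down"}
    payload = transaction["payload"]
    delta = 1 if payload["direction"] == "up" else -1
    return {"post_id": payload["post_id"], "vote_delta": delta}

if __name__ == "__main__":
    # OpenFaaS-style convention: payload arrives on stdin, result goes to stdout
    print(json.dumps(handle(json.loads(sys.stdin.read()))))
```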

Flexibility in Scaling

Dragonchain’s architecture considers interoperability and integration as key features. From inception, we had a goal to increase adoption via integration with real business use cases and traditional systems.
We envision the ability for Reddit, in the future, to be able to integrate alternate content storage platforms or other financial services along with the token.
  • LBRY - to allow users to deploy content natively to LBRY
  • MakerDAO - to allow users to lend small amounts backed by their Reddit community points
  • STORJ/SIA - to allow decentralized on-chain storage of portions of content
These integrations, or any others, are relatively easy to build on Dragonchain with an Interchain implementation.

Cost

Cost estimates (on-chain and off-chain)
For the purpose of this proposal, we assume that all transactions are on chain (posts, replies, and votes).
On the Dragonchain network, transaction costs are deterministic/predictable. By staking TIME on the business node (as described above) Reddit can reduce transaction costs to as low as $0.0000025 per transaction.
Dragonchain Fees Table

Getting Started

How to run it
Building on Dragonchain is simple and requires no blockchain experience. Spin up a business node (L1) in our managed environment (AWS), run it in your own cloud environment, or on-prem in your own datacenter. Clear documentation will walk you through the steps of spinning up your first Dragonchain Level 1 Business node.
Getting started is easy...
  1. Download Dragonchain’s dctl
  2. Input three commands into a terminal
  3. Build an image
  4. Run it
More information can be found in our Get started documents.

Architecture
Dragonchain is an open source hybrid platform. Through Dragon Net, each chain combines the power of a public blockchain (like Ethereum) with the privacy of a private blockchain.
Dragonchain organizes its network into five separate levels. A Level 1, or business node, is a totally private blockchain only accessible through the use of public/private keypairs. All business logic, including smart contracts, can be executed on this node directly and added to the chain.
After creating a block, the Level 1 business node broadcasts a version stripped of sensitive private data to Dragon Net. Three Level 2 Validating nodes validate the transaction based on guidelines determined by the business. A Level 3 Diversity node checks that the Level 2 nodes are from a diverse array of locations. A Level 4 Notary node, hosted by a KYC partner, then signs the validation record received from the Level 3 node. The transaction hash is ledgered to the Level 5 public chain to take advantage of the hash power of massive public networks.
Dragon Net can be thought of as a “blockchain of blockchains”, where every level is a complete private blockchain. Because an L1 can send to multiple nodes on a single level, proof of existence is distributed among many places in the network. Eventually, proof of existence reaches level 5 and is published on a public network.
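The five-level flow can be compressed into a few lines of illustrative pseudocode; this is a simplified mental model only (real validation rules, diversity checks, and signatures are far more involved):

```python
import hashlib

def strip_private_data(block: dict) -> dict:
    """L1 broadcasts proof of the block, not the sensitive payloads themselves."""
    return {"id": block["id"],
            "hash": hashlib.sha256(repr(block).encode()).hexdigest()}

def dragon_net(block: dict) -> dict:
    public = strip_private_data(block)                 # L1 -> broadcast
    l2 = [validate(public) for _ in range(3)]          # three validating nodes
    l3 = diversity_check(l2)                           # geographic diversity
    l4 = notary_sign(l3)                               # KYC-hosted notary
    return anchor_to_public_chain(l4)                  # L5 -> BTC/ETH/ETC

# Stand-in helpers so the model runs end to end.
def validate(record): return {**record, "l2_ok": True}
def diversity_check(records): return {"records": records, "diverse": True}
def notary_sign(record): return {**record, "notary_sig": "<signature>"}
def anchor_to_public_chain(record): return {**record, "anchored": True}

print(dragon_net({"id": 1, "votes": ["up", "up", "down"]}))
```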

API Documentation

APIs (on chain & off)

SDK Source

Nobody’s Perfect

Known issues or tradeoffs
  • Dragonchain is open source, and even though the platform is easy enough for developers to code in any language they are comfortable with, our developer community is not as large as Ethereum's. We would like to see the Ethereum developer community (and any other communities) become familiar with our SDKs, our solutions, and our platform, to unlock the full potential of our Ethereum Interchain. Long ago we decided to prioritize both Bitcoin and Ethereum Interchains. We envision an ecosystem that encompasses different projects to give developers the ability to take full advantage of all the opportunities blockchain offers, creating decentralized solutions not only for Reddit but for all of our current platforms and systems. We believe that together we will take the adoption of blockchain further. We currently have an additional Interchain with Ethereum Classic, and we look forward to Interchains with other blockchains in the future. We invite all blockchain projects who believe in decentralization and security to Interchain with Dragonchain.
  • We have only ~700 nodes, compared to Ethereum's 8,000 and Bitcoin's 10,000; however, through Interchain we harness those 18,000 nodes to scale to extremely high levels of security. See Dragonchain metrics.
  • Some may consider the centralization of Dragonchain’s business nodes as an issue at first glance, however, the model is by design to protect business data. We do not consider this a drawback as these nodes can make any, none, or all data public. Depending upon the implementation, every subreddit could have control of its own business node, for potential business and enterprise offerings, bringing new alternative revenue streams to Reddit.

Costs and resources

Summary of cost & resource information for both on-chain & off-chain components used in the PoC, as well as cost & resource estimates for further scaling. If your PoC is not on mainnet, make note of any mainnet caveats (such as congestion issues).
Every transaction on the PoC system had a transaction fee of $0.0001 (one-hundredth of a cent USD). At 256MM transactions, the demo cost $25,600. With current operational fees, the same demonstration would cost $640 USD.
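The arithmetic behind those two figures, for readers who want to check it:

```python
transactions = 256_000_000
print(transactions * 0.0001)      # demo-era fee of $0.0001/txn  -> 25600.0
print(transactions * 0.0000025)   # current lowest fee tier      -> 640.0
```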
For the demonstration, to achieve throughput to mimic a worldwide payments network, we modeled several clients in AWS and 4-5 business nodes to handle the traffic. The business nodes were tuned to handle higher throughput by adjusting memory and machine footprint on AWS. This flexibility is valuable to implementing a system such as envisioned by Reddit. Given that Reddit’s daily traffic (posts, replies, and votes) is less than half that of our demo, we would expect that the entire Reddit system could be handled on 2-5 business nodes using right-sized containers on AWS or similar environments.
Verification was accomplished on the operational Dragon Net network with over 700 independently owned verification nodes running around the world at no cost to the business other than paid transaction fees.

Requirements

Scaling

This PoC should scale to the numbers below with minimal costs (both on & off-chain). There should also be a clear path to supporting hundreds of millions of users.
Over a 5 day period, your scaling PoC should be able to handle:
  • 100,000 point claims (minting & distributing points)
  • 25,000 subscriptions
  • 75,000 one-off points burning
  • 100,000 transfers
During Dragonchain’s 24 hour demo, the above required numbers were reached within the first few minutes.
Reddit’s total activity is 9000% more than Ethereum’s total transaction level. Even if you do not include votes, it is still 700% more than Ethereum’s current volume. Dragonchain has demonstrated that it can handle 250 million transactions a day, and its architecture allows for multiple systems to work at that level simultaneously. In our PoC, we demonstrated double the full capacity of Reddit, and every transaction was proven all the way to Bitcoin and Ethereum.
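A quick sanity check of both claims, comparing the demo's sustained rate against Reddit's 5-day requirement:

```python
demo_tps = 250_000_000 / 86_400                        # ~2,894 txns/second sustained
required_ops = 100_000 + 25_000 + 75_000 + 100_000     # Reddit's 5-day totals
print(f"{demo_tps:,.0f} TPS sustained in the demo")
print(f"requirement absorbed in ~{required_ops / demo_tps:,.0f} seconds")   # ~104 s
```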
Reddit Scaling on Ethereum

Decentralization

Solutions should not depend on any single third-party provider. We prefer solutions that do not depend on specific entities such as Reddit or another provider, and solutions with no single point of control or failure in off-chain components, but recognize there are numerous trade-offs to consider.
Dragonchain’s architecture calls for a hybrid approach. Private business nodes hold the sensitive data while the validation and verification of transactions for the business are decentralized within seconds and secured to public blockchains within 10 minutes to 2 hours. Nodes could potentially be controlled by owners of individual subreddits for more organic decentralization.
  • Billing is currently centralized - there is a path to federation and decentralization of a scaled billing solution.
  • Operational multi-cloud
  • Operational on-premises capabilities
  • Operational deployment to any datacenter
  • Over 700 independent Community Verification Nodes with proof of ownership
  • Operational Interchain (Interoperable to Bitcoin, Ethereum, and Ethereum Classic, open to more)

Usability

Scaling solutions should have a simple end user experience.

Users shouldn't have to maintain any extra state/proofs, regularly monitor activity, keep track of extra keys, or sign anything other than their normal transactions
Dragonchain and its customers have demonstrated extraordinary usability as a feature in many applications, where users do not need to know that the system is backed by a live blockchain. Lyceum is one of these examples, where the progress of academy courses is being tracked, and successful completion of courses is rewarded with certificates on chain. Our @Save_The_Tweet bot is popular on Twitter. When used with one of the following hashtags - #please, #blockchain, #ThankYou, or #eternalize the tweet is saved through Eternal to multiple blockchains. A proof report is available for future reference. Other examples in use are DEN, our decentralized social media platform, and our console, where users can track their node rewards, view their TIME, and operate a business node.

Transactions complete in a reasonable amount of time (seconds or minutes, not hours or days)
All transactions are immediately usable on chain by the system. A transaction begins the path to decentralization at the conclusion of a 5-second block, when it gets distributed across 5 separate community-run nodes. Full decentralization occurs within 10 minutes to 2 hours depending on which Interchain (Bitcoin, Ethereum, or Ethereum Classic) the transaction hits first. Within approximately 2 hours, the combined hash power of all Interchained blockchains secures the transaction.

Free to use for end users (no gas fees, or fixed/minimal fees that Reddit can pay on their behalf)
With transaction pricing as low as $0.0000025 per transaction, it may be considered reasonable for Reddit to cover transaction fees for users.
All of Reddit's Transactions on Blockchain (month)
Community points can be earned by users and distributed directly to their Reddit account in batch (as per Reddit's minting plan), allowing users to withdraw rewards to their Ethereum wallet whenever they wish. Withdrawal fees can be paid by either the user or Reddit. This model has been operating inside the Dragonchain system since 2018, and many security and financial compliance features can be optionally added. We feel that this capability greatly enhances the user experience because it is seamless to a regular user without cryptocurrency experience, yet flexible for a tech-savvy user. With regard to currency or token transactions, these would occur on the Reddit network, verified to BTC and ETH, and would incur the $0.0000025 transaction fee. To estimate this fee, we use the count of monthly active Reddit users (per Statista) with a 60% adoption rate and an estimated 10 transactions per user per month, resulting in an approximate $720 monthly cost across the system. Reddit could feasibly incur all associated internal network charges (mining/minting, transfer, burn) as these are very low and controllable fees.
Reddit Internal Token Transaction Fees
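For transparency, the $720 estimate is consistent with roughly 48 million monthly active users under the stated assumptions; that MAU figure is back-solved by us for illustration rather than quoted from Statista:

```python
fee_per_txn = 0.0000025
monthly_users = 48_000_000     # illustrative MAU consistent with the $720 figure
adoption = 0.60                # assumed share of users transacting
txns_per_user = 10             # assumed transactions per user per month
print(monthly_users * adoption * txns_per_user * fee_per_txn)   # -> 720.0
```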

Reddit Ethereum Token Transaction Fees
When we consider further the Ethereum fees that might be incurred, we have a few choices for a solution.
  1. Offload all Ethereum transaction fees (user withdrawals) to interested users as they wish to withdraw tokens for external use or sale.
  2. Cover Ethereum transaction fees by aggregating them on a timed schedule. Users would request withdrawal (from Reddit or individual subreddits), and they would be transacted on the Ethereum network every hour (or some other schedule).
  3. In a combination of the above, customers could cover aggregated fees.
  4. Integrate with alternate Ethereum roll up solutions or other proposals to aggregate minting and distribution transactions onto Ethereum.

Bonus Points

Users should be able to view their balances & transactions via a blockchain explorer-style interface
From interfaces for users who have no knowledge of blockchain technology to users who are well versed in blockchain terms such as those present in a typical block explorer, a system powered by Dragonchain has flexibility on how to provide balances and transaction data to users. Transactions can be made viewable in an Eternal Proof Report, which displays raw data along with TIME staking information and traceability all the way to Bitcoin, Ethereum, and every other Interchained network. The report shows fields such as transaction ID, timestamp, block ID, multiple verifications, and Interchain proof. See example here.
Node payouts within the Dragonchain console are listed in chronological order and can be further seen in either Dragons or USD. See example here.
In our social media platform, Dragon Den, users can see, in real-time, their NRG and MTR balances. See example here.
A new influencer app powered by Dragonchain, Raiinmaker, breaks down data into a user friendly interface that shows coin portfolio, redeemed rewards, and social scores per campaign. See example here.

Exiting is fast & simple
Withdrawing funds on Dragonchain’s console requires three clicks; however, withdrawal scenarios with more enhanced security features, per Reddit’s discretion, are obtainable.

Interoperability

Compatibility with third party apps (wallets/contracts/etc) is necessary.
We offer proven interoperability at scale that surpasses the required specifications. Our entire platform consists of interoperable blockchains connected to each other and to traditional systems. APIs are well documented. Third-party permissions are possible with a simple smart contract, without the end user being aware. There is no need to learn any specialized proprietary language: any code base (not subsets) is usable within a Docker container, and the platform is interoperable with any blockchain or traditional APIs. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes built with BASH shell and Node.js. Please see our source code and API documentation.

Scaling solutions should be extensible and allow third parties to build on top of it
Open source and extensible.
APIs should be well documented and stable

Documentation should be clear and complete
For full documentation, explore our docs, SDKs, GitHub repos, architecture documents, original Disney documentation, and other links or resources provided in this proposal.

Third-party permissionless integrations should be possible & straightforward
Smart contracts are Docker based, can be written in any language, use the full language (not subsets), and can therefore be integrated with any system, including traditional system APIs. Simple is better. Learning an uncommon or proprietary language should not be necessary.
Advanced knowledge of mathematics, cryptography, or L2 scaling should not be required. Compatibility with common utilities & toolchains is expected.
Dragonchain business nodes and smart contracts leverage Docker to allow the use of literally any language or executable code. No proprietary language is necessary. We’ve witnessed relatively complex systems built by engineers with no blockchain or cryptocurrency experience. We’ve also demonstrated the creation of smart contracts within minutes built with BASH shell and Node.js.

Bonus

Bonus Points: Show us how it works. Do you have an idea for a cool new use case for Community Points? Build it!

TIME

Community points could also be awarded to Reddit users based upon TIME, whereby the longer someone is part of a subreddit, the more community points they naturally gain, even if not actively commenting or sharing new posts. A daily login could be required for these community points to be credited. This grants awards to readers too and incentivizes readers to create an account on Reddit if they browse the website often. This concept could also be leveraged to provide some level of reputation based upon duration and consistency of contribution to a community subreddit.
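One way such a TIME-based award could look, with entirely hypothetical parameters:

```python
def community_points(days_subscribed: int, days_logged_in: int,
                     rate_per_day: float = 0.1) -> float:
    """Award points for membership duration, credited only on days with a login."""
    eligible_days = min(days_subscribed, days_logged_in)
    return eligible_days * rate_per_day

# A two-year subscriber who logged in on 500 of those days earns 50 points.
print(community_points(days_subscribed=730, days_logged_in=500))
```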

Dragon Den

Dragonchain has already built a social media platform that harnesses community involvement. Dragon Den is a decentralized community built on the Dragonchain blockchain platform. Dragon Den is Dragonchain’s answer to fake news, trolling, and censorship. It incentivizes the creation and evaluation of quality content within communities. It could be described as being a shareholder of a subreddit or Reddit in its entirety. The more your subreddit is thriving, the more rewarding it will be. Den is currently in a public beta and in active development, though the real token economy is not live yet. There are different tokens for various purposes. Two tokens are Lair Ownership Rights (LOR) and Lair Ownership Tokens (LOT). LOT is a non-fungible token for ownership of a specific Lair. LOT will only be created and converted from LOR.
Energy (NRG) and Matter (MTR) work jointly. Your MTR determines how much NRG you receive in a 24-hour period. Providing quality content, or evaluating content will earn MTR.

Security. Users have full ownership & control of their points.
All community points awarded based upon any type of activity or gift are secured and provable to all Interchain networks (currently BTC, ETH, ETC). Users are free to spend and withdraw their points as they please, depending on the features Reddit wants to bring into production.

Balances and transactions cannot be forged, manipulated, or blocked by Reddit or anyone else
Users can withdraw their balance to their ERC20 wallet, directly through Reddit. Reddit can cover the fees on their behalf, or the user covers this with a portion of their balance.

Users should own their points and be able to get on-chain ERC20 tokens without permission from anyone else
Through our console, users can withdraw their ERC20 rewards. This can be achieved on Reddit too. Here is a walkthrough of our console; though it does not show the quick-withdrawal functionality, a user can withdraw at any time. https://www.youtube.com/watch?v=aNlTMxnfVHw

Points should be recoverable to on-chain ERC20 tokens even if all third-parties involved go offline
If necessary, signed transactions from the Reddit system (e.g. Reddit + Subreddit) can be sent to the Ethereum smart contract for minting.
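A minimal sketch of that recovery path using the eth_account library: Reddit signs a mint authorization off-chain, and anyone can later recover and verify the signer before submitting the claim to the token contract, even with Reddit's servers gone. The message format (and the idea that the contract checks the recovered address) are our assumptions for illustration:

```python
from eth_account import Account
from eth_account.messages import encode_defunct

# Off-chain, while Reddit is still online: sign an authorization as points are earned.
reddit_signer = Account.create()          # stands in for Reddit's signing key
claim = encode_defunct(text="mint 150 points to 0xAbC...123, nonce 7")
signed = Account.sign_message(claim, private_key=reddit_signer.key)

# Later, with Reddit offline: the user presents (claim, signature) to the Ethereum
# contract, which recovers the signer and mints only if it matches the known key.
recovered = Account.recover_message(claim, signature=signed.signature)
assert recovered == reddit_signer.address   # the authorization checks out
```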

A public, third-party review attesting to the soundness of the design should be available
To our knowledge, at least two large corporations, including a top 3 accounting firm, have conducted positive reviews. These reviews have never been made public, as Dragonchain did not pay or contract for these studies to be released.

Bonus points
Public, third-party implementation review available or in progress
See above

Compatibility with HSMs & hardware wallets
For the purpose of this proposal, all tokenization would be on the Ethereum network using standard token contracts and as such, would be able to leverage all hardware wallet and Ethereum ecosystem services.

Other Considerations

Minting/distributing tokens is not performed by Reddit directly
This operation can be automated by smart contract on Ethereum. Subreddits can, if desired, have a role to play.

One off point burning, as well as recurring, non-interactive point burning (for subreddit memberships) should be possible and scalable
This is possible and scalable with interaction between Dragonchain Reddit system and Ethereum token contract(s).

Fully open-source solutions are strongly preferred
Dragonchain is fully open source (see section on Disney release after conclusion).

Conclusion

Whether it is today, or in the future, we would like to work together to bring secure flexibility to the highest standards. It is our hope to be considered by Ethereum, Reddit, and other integrative solutions so we may further discuss the possibilities of implementation. In our public demonstration, 256 million transactions were handled on chain in our operational network in 24 hours, for the low cost of $25,600, which if run today would cost $640. Dragonchain’s interoperable foundation provides the atmosphere necessary to implement a frictionless community points system. Thank you for your consideration of our proposal. We look forward to working with the community to make something great!

Disney Releases Blockchain Platform as Open Source

The team at Disney created the Disney Private Blockchain Platform. The system was a hybrid interoperable blockchain platform for ledgering and smart contract development, geared toward solving problems with blockchain adoption and usability. Any objective evaluation would consider the team’s output a success. We released a list of use cases that we explored in some capacity at Disney, and our input on blockchain standardization as part of our participation in the W3C Blockchain Community Group.
https://lists.w3.org/Archives/Public/public-blockchain/2016May/0052.html

Open Source

In 2016, Roets proposed to release the platform as open source to spread the technology outside of Disney, as others within the W3C group were interested in the solutions that had been created inside of Disney.
Following a long process, step by step, the team met requirements for release. Among the requirements, the team had to:
  • Obtain VP support and approval for the release
  • Verify ownership of the software to be released
  • Verify that no proprietary content would be released
  • Convince the organization that there was a value to the open source community
  • Convince the organization that there was a value to Disney
  • Offer the plan for ongoing maintenance of the project outside of Disney
  • Itemize competing projects
  • Verify no conflict of interest
  • Preferred license
  • Change the project name to not use the name Disney, any Disney character, or any other associated IP - proposed Dragonchain - approved
  • Obtain legal approval
  • Approval from corporate, parks, and other business units
  • Approval from multiple Disney patent groups
  • Copyright holder defined by Disney (Disney Connected and Advanced Technologies)
  • Trademark searches conducted for the selected name Dragonchain
  • Obtain IT security approval
  • Manual review of OSS components conducted
  • OWASP Dependency and Vulnerability Check Conducted
  • Obtain technical (software) approval
  • Offer management, process, and financial plans for the maintenance of the project.
  • Meet list of items to be addressed before release
  • Remove all Disney project references and scripts
  • Create a public distribution list for email communications
  • Remove Roets’ direct and internal contact information
  • Create public Slack channel and move from Disney slack channels
  • Create proper labels for issue tracking
  • Rename internal private Github repository
  • Add informative description to Github page
  • Expand README.md with more specific information
  • Add information beyond current “Blockchains are Magic”
  • Add getting started sections and info on cloning/forking the project
  • Add installation details
  • Add uninstall process
  • Add unit, functional, and integration test information
  • Detail how to contribute and get involved
  • Describe the git workflow that the project will use
  • Move to public, non-Disney git repository (Github or Bitbucket)
  • Obtain Disney Open Source Committee approval for release
On top of meeting the above criteria, as part of the process, the maintainer of the project had to receive the codebase on their own personal email and create accounts for maintenance (e.g. Github) with non-Disney accounts. Given the fact that the project spanned multiple business units, Roets was individually responsible for its ongoing maintenance. Because of this, he proposed in the open source application to create a non-profit organization to hold the IP and maintain the project. This was approved by Disney.
The Disney Open Source Committee approved the application known as OSSRELEASE-10, and the code was released on October 2, 2016. Disney decided to not issue a press release.
Original OSSRELEASE-10 document

Dragonchain Foundation

The Dragonchain Foundation was created on January 17, 2017. https://den.social/l/Dragonchain/24130078352e485d96d2125082151cf0/dragonchain-and-disney/
submitted by j0j0r0 to ethereum

Augur V2 is the next big step in Defi

This 6-month-old Augur V2 video got me excited. I thought I’d share its value proposition, which I feel is currently being overlooked.
If you’ve been in the space for some time, you know what Augur is: a decentralized prediction market and the biggest (in ETH)/earliest ICO on Ethereum. Prediction markets allow for better forecasting by leveraging the power of incentivized wisdom of the crowd. V2 will soon launch with a revamped UI, cheap 0x orders and stablecoin integration. It’s set to become the most accessible, fair and open betting platform out there.
What you may not realize is its impact in the Defi space. Each market/prediction/question is represented by a token that can be traded in other Defi apps. This gives it incredible flexibility. Consider these possibilities:
This synergetic composability gets incredibly interesting when combined with other Defi legos. How about token sets based on bets on the ratio of active addresses on Ethereum vs Bitcoin? Why not make a Uniswap pair between a Real-T token and a bet against Detroit real estate, to hedge your position and gain transaction fees on the side?
Tokenomics
With growing interest over new Defi tokens, REP will no doubt position itself among the top. It’s one of the few that actually benefits from using a blockchain and has a utility that isn’t just governance related. Staked REP consensus is used to validate markets and collect fees in the process.
We’ve seen most successful Defi tokens pick up steam, especially in the past month, as mirrored by their sharp price increases: BNT +200%, KNC +90%, LEND +70%, MKR +60%, LRC +140%. Augur V1 markets aren’t being used right now since the long-awaited V2 is just around the corner. The repeated additional delays in V2’s launch date have kept its price comparatively low.
With that in mind, if one believes in the team’s ability to deliver and for Defi to continue growing, REP seems to be an extremely strong long term play. Whether you're a token holder or not, you'll likely see its contribution in many spheres of the Defi world. The above examples only scratch the surface of what it enables.
Disclaimer - I own some REP
For more info: the Augur V2 Whitepaper, the final pre-launch tasks, and The Augur Edge by pacific_Oc3an.
submitted by Owdy to ethfinance

Same lies about Dash still floating around - Time for another crackdown

This article was caught by our spamfilter because it's hosted on a reward-for-article platform (similar to Steemit). Nonetheless I will respond to it, as it recycles the same old lies about Dash many of us have grown used to, and it serves as an illustration that these lies and accusations are still, after 6 years, regurgitated 1:1 by ignorant bystanders without a single critical thought behind them.
The article is a real mess, as you will notice. The author didn't even bother to edit the question section from ZCash to Dash. Anyway, that's not the real issue. This is about the section "Dash Launch And Issuance", which I am about to respond to.
The Dash cryptocurrency has a rather controversial launch and issuance as it involved a 2-day ‘instamine’ or ‘premine’
There is a huge difference between a premine and a fastmine. Conflating the two shows the author is either ignorant and thus unqualified to talk about the subject matter or tries to mislead the reader. A premine means the public is excluded from mining, which provably never happened in Dash, thus the term is 100% inapplicable.
According to Dash’s founder, Evan Duffield, this premine was the result of a bug. However, there are many people who beg to differ.
Those who "beg to differ" are idiots plain and simple. Because we have already proven that the same thing happened 2.5 years earlier in Litecoin from which Dash originally forked off (and not from Bitcoin as the author initially claims at the beginning of the article).
To ‘resolve’ the situation, the majority of these instamined coins were sold for very low prices on exchanges. However, it is widely believed that most of these coins were simply scooped up by Evan Duffield and the Dash core team for incredibly low prices.
This is my favorite part, because it's probably the only original thought in the article, as I have never heard anyone make up something so incoherently dumb with regards to Dash's launch.
First they say Evan mined "all the coins by himself"; now suddenly he's buying coins for "incredibly low prices"?! So, which is it? Can't have it both ways. Or did he buy coins from himself?
Let's cut the crap. Is it too hard to simply admit the coins were sold on the free and open market to ANYONE willing to buy them?? Nothing was sold to "resolve" anything.
Anyone, not just Evan and the team, had the chance to buy cheap coins for under a Dollar, which was fair and square, hands down. Those who missed out crying foul today are really just trying too hard. Also Dash Core did not exist in 2014. Again the author shows his ignorance further disqualifying himself.
Duffield released Dash before its intended release date (which means the instamine that occurred can be classified as a premine).
This is a lie. It did not happen as anyone can see on the blockchain. The author is a liar.
Dash’s instamine happened because Dash was launched with a block reward of 500 coins per block before it was abruptly cut 2 days later after 2million coins were instamined.
Of course the fix would happen "abruptly" as soon as the necessary blockheight was reached, otherwise it would've gone on forever. It was a mess up and nobody denies this. The fact remains that the coins were sold for pennies as my analysis has shown. Here Evan talks about the launch himself. He clearly states that coins were spilt out too fast and he was under immense pressure to fix it asap.
Initially, Dash was only minable on Linux, which could have been intended so that fewer people would mine Dash and Duffield and Dash’s core team could dominate mining (over 90% of all computers use the Windows operating system).
Conjecture without a shred of evidence, assuming ill intent over good faith. Back in 2014 mining was still a niche hobby for nerds, among whom Linux was the dominant operating system. Evan is a nerd who was using Linux at the time; he didn't own a Windows machine. If he did, why would he offer a reward to someone compiling a Windows binary? If he were the evil fastminer everyone accuses him of being, it would have been in his best interest to delay Windows mining as much as possible. The accusation holds no water.
Evan Duffield was able to temporarily mine Dash by himself for a while when the public miner wasn't working.
Lie. Lie. Lie. This never happened and the author is still a liar. I know what this alludes to and I have thoroughly debunked it as the nonsense it is.
Dash was initially meant to have a max supply of 80 million DASH, but Evan Duffield held an obscure poll where the majority of the voters voted to reduce the Dash max coin supply from 80million to 20million (which instantly made the instamined DASH worth more).
First of all the value of all coins goes up with a lower supply, not just those that were first mined (duh!). This is intellectual dishonesty, but hardly surprising considering the previous lies of the author. Second, this poll is near worthless as a historic summary as it ignores what went down in the community before: Watch this analysis by Tao of Satoshi to learn that there was major community pressure on Evan's back with endless debates on how to change the reward structure and emission rate of Dash until it was finally settled after a full 7 adjustments in both directions. The narrative that there was one evil singular supply decrease to maliciously enrich Evan is completely stupid.
All in all, Dash’s launch is shrouded in controversy, and it's clear that Evan Duffield and Dash developers benefited tremendously from how everything went down.
The only thing shrouded in mystery is why anyone would believe these false conclusions, accusations and flat out lies when the facts are readily accessible. Of course the founder of any cryptocurrency benefits the most. It's how it's supposed to be. Satoshi "benefitted tremendously" as well from launching Bitcoin. The fact that Evan was able to quit his day job and work for Dash full time was the best thing that could have ever happened to this project. Of course our detractors don't like that, because they don't want to see Dash succeed. But we did and we're still here after 6 years of ceaseless innovation. Repeating ancient lies cannot undo that.
submitted by Basilpop to dashpay

The elephant in the (Crypto) room: "Mining" and its energy waste

I know this post is a bit of a wall of text but hear me out. I do my best to explain my thoughts on the drawbacks of mining and why cryptos that cut out mining are so important.
"Mining" is a misnomer. To laypeople, using this term to describe the consensus mechanism for Proof of Work cryptocurrencies makes it sound like something productive and worthwhile. Who would criticize someone with the admirable and noble task of working to extract gold from the Earth? A valuable piece of metal is produced thanks to their hard work. But crypto mining is different; while it does have a purpose, it is far from productive.

So what is bitcoin mining? If you're to believe the most basic explanations offered such as from this video (https://www.youtube.com/watch?v=GmOzih6I1zs), miners solve "complex math problems". I can still remember when I heard this for the first time (years ago) and even though I'm pretty mathematically inclined, I had assumed this meant that these complex math problems were actually useful and necessary to 'unlock' those bitcoins somehow, and for a long time I didn't think anything more of it. To my mind, I imagined it like there's a million problems to solve and each time you solve one you get a reward. The math problem might have been, for example, to find the next largest prime. Instead the actual problem is, at its most basic level, nonce finding. See https://en.bitcoin.it/wiki/Nonce. Different coins or forks may use a different problem but the end result is the same - energy is spent solving a pointless problem ('pointless' in the sense that the actual math answer doesn't benefit anyone).
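For the curious, the entire "complex math problem" fits in a few lines: increment a nonce until the double-SHA256 of the header falls below a target. Nothing about the answer is useful for anything else (this is a toy sketch with an easy target, so it finishes quickly):

```python
import hashlib

def mine(block_header: bytes, target: int) -> int:
    """Find a nonce so that sha256(sha256(header + nonce)) < target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(hashlib.sha256(
            block_header + nonce.to_bytes(8, "little")).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce     # the "solution": a meaningless lucky number
        nonce += 1

# An easy toy target (about 1 in 65,536 hashes wins). Bitcoin's real target is
# tuned so the *whole network's* combined hash rate wins only every ~10 minutes.
print(mine(b"example block header", target=2**240))
```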

In reality bitcoin mining could be better described as "provably expending energy in exchange for lottery tickets". It's an arms race of everyone competing to waste energy. The more energy wasted, the more likely one is to win the lottery. See here for an example: https://www.youtube.com/watch?v=K8kua5B5K3I&t=2m44s. I find it abhorrent that there are entire businesses (at several scales at that) set up primarily to "mine" bitcoin or other coins. I see videos like this one (Digital Gold: https://www.youtube.com/watch?v=kxbCHlXZ-0U) and think it bizarre that it's considered acceptable for businesses set up to waste energy to protect the network and that people are so sad when the market takes a turn and they have to close up shop. Your business model is to compete with other people to waste energy to earn lottery tickets that have variable value. Those who can lower their operating costs the most will be the most profitable (or with the way difficulty adjustments happen, perhaps the *only* ones profitable). A portion of the money flowing in to buy BitCoin is being used to prop up these wasteful businesses. Because it's considered normal by now people don't get outraged at this fact.

Some people who have been around crypto for years take it for granted that this type of process is necessary for the security of the network. To some extent this misunderstanding is forgivable: PoW is the oldest method and has worked quite well, especially at small scale (before mass adoption) when the total energy expenditure was not all that high. Proof of Stake cryptos have demonstrated that the waste is not necessary; in particular, cryptos like Nano with its delegated Proof of Stake show potential to be just as secure as PoW coins, if not more so, because there is less centralization pressure: no one has a significant incentive to try to control more of the vote, whereas in PoW economies of scale push the small miners out of business. A big part of the reason BitCoin transactions became so expensive in Dec 2017 was that to "buy" a transaction in the BitCoin network you had to pay for a part of the combined energy wastage of the network; the other component being that you're also in a bidding war against other people determined to get their transaction included in the next block. So your transaction fee (aka 'mining fee') is you trying to outbid other people to see who gets to pay for the person wasting electricity. Imagine if each end-user scoffing at the $20+ withdrawal fee on Coinbase at the time actually understood what was behind that fee, rather than thinking of it as a nebulous "network fee".

A quote I saw on cc that exemplifies this mindset is as follows:
"And a chain with no fees has no mechanism to pay for security. There NEED to be fees, they just need to be lower than with fiat payment systems."

So many of the BitCoin clones/forks make some attempt to mitigate this problem by, for example, increasing blocksize or changing other parameters like block times. In the end though, most of them are still based on this method of energy wastage to secure the network, aka Proof of Work.
Now if there were no more efficient method than PoW mining then it might be fair to say that its energy expenditure (comparable to the entire energy use of a small country like Belgium) is a necessary price to pay for the value provided by the unique features of the network. In other words, that the energy cost is 'worth it'. The thing is though, there *are* ways to secure a network with far less (or virtually no) energy cost and Nano provides one such case.

Does anyone else find it insane that people in this space think the energy waste that goes into so-called "mining" is normal? Do we need to re-label mining as something that better reflects its nature? Because the end user is generally not involved with the mining, I think they don't really consider the energy cost that their transactions have. And to most of these people, telling them the entire Nano network can be powered by a single wind turbine probably doesn't mean anything. Does there need to be a grassroots movement to push back against wasteful 'mining'? Laypeople concerned about the environmental impact caused by the energy wastage of cryptos often seem to be under the impression that all crypto is necessarily wasteful. How can we get people to care if at the end of the day they just pay a fee and don't get to see the impact? Nano being feeless is one of its biggest strengths, but not just because it saves people using it a little bit of money; it's more the fact that it cuts out the massive-scale problem of mining. This is hard to get across in a short slogan like "fast, feeless, scalable" though.
submitted by manageablemanatee to nanocurrency

Sodapoppin and other big streamers are about to get sponsored by one of the biggest online fraudsters

Hey to everyone from a throwaway account (I kinda do like my life and I'm not really interested in losing it ;))!
I hope there is still time for me to warn all of you before all this shit blows up and it's no longer possible to do anything against it. I hope I will be able to prevent all this from happening.
Let's start from the beginning. There is a guy named [Vlad](https://twitter.com/vladtweeting) who is the founder of a company named [Kickback](https://www.linkedin.com/company/kickback-gg/). In the beginning, his company offered 1v1 csgo games for real money. At the time csgo skin gambling became a huge business, and Vlad teamed up with FaZe Rain and FaZe Banks, and together they worked on a website called csgowild. In 2016 Valve announced that all csgo gambling sites had to shut down immediately (csgowild at the time actually did shut down). Approximately a year after that, Vlad created a completely rigged csgo skin opening website called SkinHub. These unregulated casinos need something called a ["provably fair system"](https://en.wikipedia.org/wiki/Provably_fair). SkinHub had one to look legit to their customers, but the code strings never matched and users were being displayed [fake outcomes](https://imgur.com/a/AKEUw) (multiple proofs, plus a lot of interesting things to read). This way Vlad was able to earn insane amounts of money and to abuse that money to get rid of their competitors by offering insane deals to content creators. We are talking up to $300,000 for a single youtube video for a youtuber with 4 million subscribers. He also bought back CsgoWild and started rigging games there too (bots playing against users), which he got [exposed](http://www.twitlonger.com/show/n_1sqclek) for the same week he launched the site, haha.
In the past few months Valve has been pretty successful at getting rid of those highly illegal gambling websites. All of Vlad's csgo skin sites are now gone, but Vlad is not. He just launched a bitcoin roulette site called [Slip](slip.gg) and many more are ready to come. Considering Vlad's history, we can expect what is coming at us.
According to my leaks from multiple sources straight from Kickback, this will come very soon. Streamers are simply not able to say no to the offers they are currently receiving (again, hundreds of thousands of dollars a month).
Please, try to do something against this yourself; Twitch will NOT do anything. Twitch will NOT ban gambling, nor say anything. Kickback's [major investor](https://twitter.com/justinkan) was also a major investor in Twitch. Wonder why csgo skin gambling was not allowed on Twitch after the PhantomL0rd scandal, but those case opening websites (yes, SkinHub) were? Yeah...
Please, again: if we are not able to stop these sponsorships from happening, just don't play on these sites that will be heavily advertised. Your money will never be safe.
Thanks,
Mr.Z.
submitted by LeakingTime to sodapoppin

Surae's (me) end-of-November (2017!) update.

You can check it out on the forums here. Here's a copypasta:
Surae's End of November (2017!) Update
Hello, everyone! Sarang posted his update a few days ago to give the community time to review his work before the end of the month. I was hoping to finish multisig off before the end of this month... so I held off on writing this update until then... but it looks like I'm somewhere between 2 days and a week behind on that estimate.
MRL Announcements
Meetings. We are holding weekly meetings on Mondays at 17:00 UTC. Logs are to be posted on my github soon(tm). Usually we alternate between "office hours" and "research meetings." At office hours, we want members of the community to come in and be able to ask questions, so we are considering opening up a relay to the freenode channel during office hours times, unless things get out of hand.
POW-Difficulty Replacement Contest. Some time in December, I am going to formalize an FFS "idea" to open up a multiple-round contest for possible replacements for our proof of work game. The first round would have a 3- or 6-month deadline. Personally, I would love it if this FFS could have an unbounded reward amount. If the community is extremely generous, we could easily whip up a large enough reward to spur lots and lots of interest across the world.
The Bitcoin POW game uses SHA256 to find nonces that produce hashes with sufficiently small digests according to the Bitcoin difficulty metric. Our current POW game uses CryptoNight to find nonces that produce hashes with sufficiently small digests according to the CryptoNote difficulty metric. The winner need not be proof of work. My current thoughts are roughly this:
All submissions will be public. Submissions that minimize incentives for centralized mining (or maximize disincentives) will be preferred over submissions that do not. Submissions that are elegant will be preferred over submissions that are not. Submissions that have provable claims about desirable properties will be preferred over submissions that do not (e.g. for either the Bitcoin or the Monero POW games, the necessary and sufficient network conditions for these games to produce blocks in a Poisson process have not been identified, to my understanding). Submissions that have a smaller environmental impact will be preferred over submissions that have a larger impact. And so on. I would like as many ideas as possible about a judging rubric for the first round. Especially if a large amount of money will be put up as a prize.
The details of the next round would be announced along with the winners of the first round. The reward funds should be released when a set of judges agree on a winner. MRL and Monero Core should each have representation on the panel of judges, and there ought to be at least one independent judge not directly associated with the Monero Project, like Peter Todd, Tim Ruffing, or someone along those lines. But, again, this is just an idea. If the community doesn't like it, we can drop it.
Here is a rundown for November
Multisig. Almost done. I know, I know, it's been forever. We, as a community, have recently come to see how important it is to carefully and formally ensure the correctness of our schemes before proceeding. Multisig is a delicate thing because a naively implemented multisig can reveal information about the participants.
I'm finishing vetting key creation today, finishing signatures tomorrow and the next day. Then I'm passing the result off to moneromooo and luigi to ensure that my description of their code is accurate up to their understanding. Then onto Sarang for final reviews before submission, hopefully by the end of the month. I have my life until Sunday evening blocked off to finish this. A copy of the document will be made available to the community ASAP (an older version is on my github), after more checking and writing is completed.
This whitepaper on multisig will be broken into two papers: one will be intended for peer review describing multi-ring signatures, and one will be a Monero Standard. More about that later...
RTRS RingCT column-linkability and amortization. You may say "what? I thought we were putting RTRS RingCT on the back burner?" Well, I'm still thinking about amortization of signatures. I'm thinking it will be possible (although perhaps not feasible) for miners to include amortized signatures upon finding new blocks. This would allow users to cite an amortized signature for fast verification, but it has some possible drawbacks. More excitingly, I'm also chatting with Tim Ruffing, one of the authors on the RTRS RingCT papers: he thinks he has a solution to our "linkability by columns" problem with MLSAG and RingCT. Currently we try to avoid using more than one ring signature per recipient, which avoids linking distinct outputs based on the bundling of these ring signatures. Ruffing believes RTRS RingCT can be tweaked to prove several commitments in a vector of commitments; this would allow a single RTRS RingCT to be computed and checked for each output being spent.
Once all the details are checked, I'll write up a document and make a copy of it available to the community. If it works, of course.
Consequences of bulletproofs. In my last end-of-month update I hinted at issues with an exponential space-time trade-off in RTRS RingCT. Due to the speed and space savings of bulletproofs, it may now be feasible to implement RTRS RingCT. With the improved verification times bulletproofs bring, we can relax our requirements on signature verification times, which counteracts the slightly longer verification times of RTRS RingCT. Solving the problem "what ring sizes can we really get away with?" involves some modeling and solving some linear programming problems (linear programming, or linear optimization, is an anachronistically named area of applied mathematics concerned with optimizing logistical problems... see here for more information).
Hence, we will be inserting bulletproofs into Monero with low friction, and then we will look into the logistics of moving to RTRS RingCT.
Monero Standards. Right now, we don't have a comprehensive description of how Monero works - all the various primitives and how they fit together. Sarang and I have begun working on some Monero Standards that are similar to the original CryptoNote Standards (see here for more information). For each standard, from our hash function on upward, we will describe the standard, provide a justification for Monero's choices in that standard (complete with references), as well as a list of possible replacement standards. For example, our Monero RingCT Standard should describe the RingCT scheme described by shen, which is essentially a ring signature with linear combinations of signing keys + amount commitments. Under the "possible replacements" section, we would describe both the RTRS RingCT scheme and the doubly efficient zk-snark technology as two separate options.
These standards may take a while to complete, and they will be living documents as we change the protocol over the years. In the meantime, they will make it dramatically easier for future researchers to step into MRL and pick up where previous researchers have left off.
Hierarchical view keys. Exploiting the algebra we currently use for computing one-time keys, the sub-address scheme plays with view keys in a certain way, allowing a user to have one single view key for many wallets. Similarly, we may split a view key into several shares, where each subset of shares can be used to grant partial view access to the wallet. A receiver can request that a sender use a particular basepoint in their transaction key where different subsets of shares of the view key grant access to transactions with different basepoints in their transaction keys. None of these are protocol-level observations, they are wallet-level observations. Moreover, these require only that a receiver optionally specify a basepoint.
In other words: hierarchical view keys are a latent feature of our one-time address scheme that has not seen specific development yet. It's a rather low priority compared to the other projects under development; it grants users fine-grained control over their legal compliance, but Monero Standards will have great long-term impact on development and research at Monero.
Criticisms. Monero has suffered some recent criticisms about our hash function. I want to briefly address them.
First, I believe part of the criticism came from a confusion between Keccak3, SHA-3, and Keccak: we have never claimed to use SHA-3 as our hash function, we have only used the Keccak3 hash function, which is a legacy choice inherited from the original CryptoNote reference code. Many developers confuse the two, but Keccak3 was the hash function on which SHA-3 is based. In particular, the Keccak sponge construction can be used to fashion lots and lots of primitives, all of which could fairly be called "Keccak": both Keccak3 and SHA-3 are Keccak constructions. This may be a subtle nomenclature issue, but it's important because a good portion of our criticisms say "Hey, they aren't using SHA-3!"
Second, I believe part of the criticism also comes from our choice of library, which in my opinion isn't a big deal as long as the library does what it says on the tin. In this case, our hash function is a valid implementation of Keccak3 according to the Keccak3 documentation. The most important criticism, from my point of view, is our choice of pre-SHA-3 Keccak3 as our hash function. Keccak3 underwent lots of analysis during the SHA-3 contest, and Keccak3 is a well-vetted hash function. However, it has not been chosen as an international standard. There is a sentiment in the cryptocurrency community to distrust standards, which is probably a healthy sentiment. In this case, however, it means that our choice of hash function is not likely to be supported in common, well-vetted libraries in the future. Moreover, since SHA-3 is an international standard, it will undergo heavy stress testing over the coming decades - a benefit Keccak3 will not enjoy.
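To make the distinction concrete, here is a small Python sketch showing that legacy Keccak (what CryptoNote-era code uses) and standardized SHA-3 disagree on the same input, because standardization changed the padding. It assumes the pycryptodome library for the legacy variant; the commented digests are the published test vectors for "abc".

    import hashlib
    from Crypto.Hash import keccak  # pip install pycryptodome

    msg = b"abc"
    legacy = keccak.new(digest_bits=256, data=msg).hexdigest()
    sha3 = hashlib.sha3_256(msg).hexdigest()

    print(legacy)  # 4e03657aea45a94fc7d47ba826c8d667c0d1e6e33a64a036ec44f58fa12d6c45
    print(sha3)    # 3a985da74fe225b2045c172d6bd390bd855f086e3e9d525b46bfe24511431532
    assert legacy != sha3  # "Keccak" and "SHA-3" are not interchangeable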
Last month, after some discussions, we made changes to our choice of PRNG in Monero to match the PRNG for Bitcoin. There have since been some discussions, initiated by anonimal, about this choice of PRNG. We at MRL are doing our best to assist the core team in weighing the relative costs and benefits of switching to a library like crypto++, and so we believe these criticisms fall into the same category. We intend to address these issues and make formal recommendations in the aforementioned Monero Standards. Sorry for using the word aforementioned.
Things that didn't move much include a) educational outreach, b) SPECTRE, c) the anti-ASIC roadmap, and d) refund transactions; most of this was on hold while multisig was being completed.
As far as educational outreach goes, I contacted a few members of a few math/CS departments at universities around me, but I haven't gotten anything hopeful yet. I wanted to go local (with respect to me) to make it easier to organize, but that's looking less likely. No matter how enthusiastic a department we find, garnering participation from faculty members, beginning an application process for new students, squirreling up funding, working out the logistics of getting teachers or lecturers/speakers from point A to point B, where to stash students, etc. would be a challenge to finish before, say, July. And some schools start their fall semesters in mid-August. So I'm thinking that Summer 2019 is reasonable for the first Monero Summer School... and it would be a real fun way to finish off a two-year post-doc!
December plan. I am going to finish multisig, and then finish the zk-lit review with Jeffrey Quesnelle, since these are both slam dunks. Any other time in December I have will be devoted to a) looking into the logistics of using the bulletproofs + RTRS RingCT set-up, b) reading the new zk-stark paper and assessing its importance for Monero, c) beginning work on Monero Standards, which includes addressing our hash function criticisms, our PRNG, etc.
Thank you again! This is an incredible opportunity, and this community is filled with some smart cookies. Every day is a challenge, and I couldn't ask for a more fun thing to be doing with my life right now. I'm hoping that my work ends up making Monero better for you.
submitted by snoether to Monero [link] [comments]

The bug which the "DAO hacker" exploited was *not* "merely in the DAO itself" (ie, *separate* from Ethereum). The bug was in Ethereum's *language design* itself (Solidity / EVM - Ethereum Virtual Machine) - shown by the "recursive call bug discovery" divulged (and dismissed) on slock.it last week.

TL;DR:
I just read the latest post from Emin Gün Sirer, and it basically took him only two lines to say pretty much everything I tried to say in my "wall of text" below:
http://hackingdistributed.com/2016/06/17/thoughts-on-the-dao-hack/
What's a Hack When You Don't Have a Spec?
First of all, I'm not even sure that this qualifies as a hack. To label something as a hack or a bug or unwanted behavior, we need to have a specification of the wanted behavior.
UPDATE: Wow. I just found these other two threads that are making arguments similar to what I'm saying here (but they're much, much more sophisticated than anything I managed to say here). I am very encouraged that people with expertise in functional languages, formal methods, and proof theory are paying attention to Ethereum and cryptocurrencies.
https://np.reddit.com/haskell/comments/4ois15/would_the_smart_formal_methods_people_here_mind/
https://np.reddit.com/ethereum/comments/4oimok/can_we_please_never_again_put_100m_in_a_contract/
Long-term, this kind of stuff is the only way that Ethereum will be able to succeed as a system for high-value smart contracts (like "The DAO" was meant to be).
And long-term, I also think it will be very important for Bitcoin to also use these kinds of approaches. (And doing something like providing a formal specification and a proof of correctness for Bitcoin using Coq + Ocaml, or reimplementing Bitcoin using Ocaml + MirageOS, would be much easier than doing this kind of stuff for Ethereum - since Bitcoin is so much simpler.)
It's also a total culture shock to go into a thread on ethereum - and see it full of real programmers. You never see a thread on r/bitcoin or r/btc full of real programmers - they've all been chased away by nullc.
Seriously, scroll down through that thread on ethereum linked above. Have you ever seen so many programming heavyweights discussing Bitcoin?
What is it about the Ethereum community where serious programmers feel welcome to comment - but in the Bitcoin community, they don't?
The original OP:
The world already has enough crappy, buggy websites based on a mish-mash of error-prone JavaScript - a low-level, procedural language which is notorious for its lack of formal semantics and verification.
JavaScript is such a mess that almost no web designers directly program in it any more - they work in one of the many higher-level "JavaScript frameworks", and/or use a higher-level language which "compiles to" JavaScript.
The mere fact that there are so many of these higher-level alternatives simply proves that a low-level language like JavaScript is not useful on its own:
https://duckduckgo.com/?t=disconnect&x=%2Fhtml&q=languages+that+compile+to+javascript&ia=web
https://github.com/jashkenas/coffeescript/wiki/List-of-languages-that-compile-to-JS
JavaScript is the "assembly language" of the web:
https://duckduckgo.com/?q=javascript+assembly+language+web&t=disconnect&ia=web
Every day, you visit websites (on your computer, on your smartphone) where some JavaScript error occurs. The page is displayed incorrectly, and you go on with your life.
There is a reason why crappy error-prone procedural low-level languages like JavaScript aren't used to power nuclear reactors, or missile systems, or X-ray machines - or financial applications.
Programs produced by these crappy low-level procedural languages routinely have bugs.
These languages are only used for unimportant things like consumer-facing websites.
(And most of those pages were not even written directly in JavaScript - they used one of those higher-level frameworks / languages in the first links above. But still - the website generated errors.)
What do the Big Boys use?
The US Department of Defense doesn't program missile systems in low-level procedural languages like JavaScript - they use languages like Ada and SPARK (and higher-level specification languages like ANNA) - where the language design itself guarantees that things like some ridiculous "recursive call bug" simply cannot happen - and where the use of a specification language forces the programmer to spell out in advance what the program is supposed to do, before digging down into the implementation details of how it's supposed to do it.
And your boring old bank uses declarative workhorses like SQL - where most of the work can be done without even running any procedural code - avoiding the very notion of "recursion" in the first place.
Now, some Ethereum devs put together an investment fund controlling a quarter of a billion dollars - using a language which looks and feels (and runs) a helluva lot like JavaScript: Ethereum's Solidity.
And the whole thing blew up in their face - because the language design of Ethereum's Solidity was totally wrong.
Contractual law / human society should not be run by these kinds of crappy bug-prone low-level procedural languages.
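To make the "recursive call bug" concrete, here is a minimal Python sketch (a hypothetical toy, not the DAO's actual code) of the reentrancy pattern at fault: the contract pays out before updating its books, so a malicious callee can re-enter withdraw() and drain funds belonging to others.

    class NaiveBank:
        def __init__(self):
            self.balances = {"attacker": 100}
            self.vault = 1000  # total funds held, including other users' money

        def withdraw(self, who, callback):
            amount = self.balances[who]
            if amount > 0 and self.vault >= amount:
                callback(amount)        # external call happens FIRST...
                self.balances[who] = 0  # ...books are updated only AFTER
                self.vault -= amount

    bank = NaiveBank()
    stolen = []

    def attacker_callback(amount):
        stolen.append(amount)
        if len(stolen) < 5:  # re-enter before the balance is zeroed
            bank.withdraw("attacker", attacker_callback)

    bank.withdraw("attacker", attacker_callback)
    print(sum(stolen))  # 500 drained from an account holding only 100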
The Big Boys derive provably correct implementations from very-high-level specifications
Note that "The DAO" had two different "descriptions":
  • A non-binding, high-level, more human-readable one (in ancillary materials, posted separately)
  • A binding, low-level, less human-readable one (the actual code)
This is ok for unimportant projects.
But for important projects, the "high-level, more human-readable" version is actually written in a formal specification language which supports things like automatically deriving the implementation from it (and mathematically proving that the implementation is correct - ie, that it satisfies the specification).
So, when using a formal specification language coupled with an implementation language, the two versions of the system are "linked" - ie, the implementation is mechanistically derived from the specification, and formal tools for derivation and validation can be used to mathematically prove that the (less human-readable) implementation has the exact same semantics as the (more human-readable) specification.
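As a toy illustration of this "linking" (my own example, in ordinary Python rather than a real specification language - a genuine formal-methods workflow would prove the equivalence for all inputs, not merely test it):

    def spec_sum(xs):
        """High-level, human-readable specification: the sum of a list."""
        return 0 if not xs else xs[0] + spec_sum(xs[1:])

    def impl_sum(xs):
        """Low-level, optimized implementation: an accumulator loop."""
        total = 0
        for x in xs:
            total += x
        return total

    # In a tool like Coq this equivalence would be a machine-checked theorem;
    # here we can only spot-check it on sample inputs.
    for xs in ([], [1], [1, 2, 3], list(range(100))):
        assert spec_sum(xs) == impl_sum(xs)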
How many cryptocurrency scripting kiddies actually know this stuff?
Lots of this stuff is probably foreign to all these scripting kiddies and web designers whose concept of "programming" up till now has basically been "Hey let's slap some JavaScript onto a web page!"
I can assure you - there are many, many programmers who would never touch that world with a ten-foot pole.
They work for the Department of Defense, they work on Wall Street (on back-office systems - handling billions of dollars), they develop software running nuclear reactors or MRI machines - or they do research and development at academic institutions.
For many of these people (in the academic world), even a supposedly "well-defined" and "battle-tested" language like C/C++ is totally "beneath" them.
I have heard theoretical computer scientists, working on DARPA-funded language design projects, say that they wanted to avoid using C/C++ as an implementation language "because it lacks a clearly defined semantics." (These are academics who use things like functional languages, algebraic languages, etc. - which are often more "declarative" in nature, versus the "procedural" languages many casual programmers use).
There is a whole world of programming where not only "GOTO" is ridiculed - but even commonly used procedural constructs like "for-loops" and "try/throw/catch" blocks for exceptions are also avoided.
Get serious or GTFO
The only acceptable, serious approach for doing stuff like "smart contracts" or the "The DAO" must be based on much more serious languages than this silly "Solidity" invented by some kid - eg, if we're going to start migrating contractual law onto machines, then the only languages we should be using must:
  • be "functional" (eg, from the family of Haskell/ML) - not procedural languages (eg, C/C++, Java, JavaScript, etc.)
  • support high-level, formal tools for program specification, derivation, and validation
As far as I'm concerned, if we want machines to run our contractual law and financial structures, then the minimal acceptable approach must be:
  • implementing in a functional language like Ocaml (used with great success by Jane Street, a Wall Street firm - check out their videos on YouTube)
  • and long-term, we should think about specifying using a language like Coq (a theorem prover which can be used to derive machine-runnable Ocaml programs/implementations from human-readable specifications).
Kids think the glass is half-full. Pros know it's half-empty.
Maybe all this sounds totally foreign and complicated to today's "scripting kiddies" - the kinds of people like Mark Karpelès who thought he could process hundreds of millions of dollars using that "fractal of bad design" known as PHP - and now Vitalik - who seems like a smart kid, but still, I wonder:
  • how much he's studied up on things like functional languages, or
  • if he's even heard of the Curry-Howard Isomorphism, and understands how it can be applied to the problem of developing human-readable specifications (analogous to theorems), and deriving provably correct machine-runnable implementations/programs (analogous to proofs) from them
  • if he's heard of stuff like NATO's 1968 conference on the "software crisis" - which many believe is still not resolved
https://duckduckgo.com/?q=nato+1968+software+crisis&ia=web
https://en.wikipedia.org/wiki/Software_crisis
  • if he's aware of the "AI Winter" - the fact that most researchers consider Artificial Intelligence to be a failure
https://duckduckgo.com/?q=AI+winter&t=disconnect&ia=about
https://en.wikipedia.org/wiki/AI_winter
The above all reflect the fact that computer programming as practiced by most people in the industry today is actually a total fucking disaster.
"Lethal software" is a thing.
http://embeddedgurus.com/barr-code/2014/03/lethal-software-defects-patriot-missile-failure/
"Worse is better" is a (tongue-in-cheek) programming design philosophy.
https://duckduckgo.com/?q=%22worse+is+better%22&t=disconnect&ia=about
"Release early, release often" is an industry slogan - to get your "minimally viable" product out there, despite the fact that it isn't actually ready for prime time yet.
https://duckduckgo.com/?q=%22Release+early%2C+release+often%22&t=disconnect&ia=about
"Waterfall" and "agile" and "Xtreme" and countless other software development and management methodologies have been proposed, out of desperation, to deal with the fact that many programming projects, using popular "procedural" languages, fail.
https://duckduckgo.com/?q=waterfall+agile+xtreme&t=disconnect&ia=web
These methodologies do all work "more-or-less" - but note that they all rely heavily on stuff outside the code (mostly meetings, pep talks, quality assurance testing, etc.) - and they have been proposed out of a dire necessity - the fact that "the code itself" normally does not work right, without continual human prodding from managerial types.
We almost never trust "the code itself" to work properly. Because after a few decades of experience (using these crappy languages), we know that it almost never does.
More examples of failed projects and "lethal software"
  • The newly constructed Denver Airport was held up for years because the developers couldn't get the software right for the baggage handling system.
https://duckduckgo.com/?q=denver+airport+software+failure&t=disconnect&ia=web
  • In one of America's many recent wars (there's so many, I can't keep track of which one that was), over in the Mid-East, the defense systems used against SCUD missiles didn't work - due to software errors.
https://duckduckgo.com/?q=patriot+scud+missiles+software+failure&t=disconnect&ia=web
  • The Ariane rocket (a $7 billion project) blew up - causing $500 million in damage.
https://duckduckgo.com/?q=ariane+software+failure&t=disconnect&ia=web
  • The Mars Climate Orbiter burned up in the Martian atmosphere - because the engineers screwed up converting between metric and imperial units. (By the way, the type systems used in functional languages have ways of easily preventing this kind of problem - but in most procedural languages, it's much harder.)
https://duckduckgo.com/?q=mars+climate+orbiter+newtons+&t=disconnect&ia=about
  • The rollout of the healthcare.gov website for Obamacare was a disaster - but to be fair, that involved trying to get hundreds of different backends from all the private insurance companies to talk to each other, so maybe that was to be expected.
https://duckduckgo.com/?q=healthcare.gov+disastrous+rollout&t=disconnect&ia=web
Software development is a mess
The take-away is: software development is a mess - even when it's done by Wall Street or NASA or the Department of Defense, incorporating "functional" languages, or "formal methods" supporting an initial "specification" followed by a derived (and supposedly "provably correct") "implementation".
So... the lesson is... a newly-invented language like Solidity... which people thought was "cool" because it "looked like" JavaScript - is nowhere near the kind of rigorous, absolutely safe level required for handling a quarter of a billion dollars in people's actual wealth.
Vitalik seems like a great guy - but this whole area of "smart contracts" and "distributed autonomous organizations" will have to attract many more serious heavyweights from industry and academia before it will be safe enough to run contractual law and financial structures controlling hundreds of millions of dollars in people's actual money and affecting people's actual lives.
Some random links
To give one tiny example (and I'm not saying that Ethereum or "The DAO" necessarily has to use this sort of thing - I'm just curious as to what people's backgrounds might be) - does anyone involved with Ethereum or "The DAO" have a passing acquaintance (perhaps from years ago) with historical, related work like the following:
Composing contracts: an adventure in financial engineering - Simon Peyton Jones
http://research.microsoft.com/en-us/um/people/simonpj/Papers/financial-contracts/contracts-icfp.pdf
Caml Trading - Yaron Minsky
https://www.youtube.com/watch?v=hKcOkWzj0_s
Why OCaml - JaneStreet
https://www.youtube.com/watch?v=v1CmGbOGb2I
Just because you're storing stuff in a permissionless blockchain, does not mean you get to ignore all this historical, possibly related work.
In particular, you can go ahead and design a "smart contracts" language to run on your decentralized permissionless blockchain. But if your goal is that it should "look like JavaScript" (instead of "acting like Haskell or Ocaml") - then you're probably doing it wrong.
It's about language design
On a final note - it's not about "recursion" or "complexity" or even "avoiding Turing-completeness". Someday, we should be able to have all those things in our smart contracts and DAOs.
What it's really about is language design - including domain-specific languages (DSLs), ideally within a development ecosystem which includes both a high-level specification language, as well as a low-level (machine-runnable) implementation language - where a provably correct program/implementation is mechanistically derived from its specification.
(And by the way, this would have given us a high-level, formal, human-readable, and legally enforceable *specification of "The DAO" - instead of the informal, meaningless, irrelevant English-language "description" which so many suckers fell for - and which the hacker was able to totally ignore and override, when he took the time to read the only "spec" there was: the "implementation", which was in code whose semantics were obvious to almost nobody.)
Language design, formal methods, program derivation and verification, model theory - these are entire fields within theoretical computer science. Is there anyone involved in "smart contracts" and DAOs who knows about this kind of stuff? If so, I think the community would love to hear what they're doing.
Sorry to be "that guy" - but someone has to say it:
Smart contracts and DAOs are going to be a disaster - and cause yet more human suffering in this capitalist system - if we base them on JavaScript-like languages - instead of on state-of-the-art industrial-strength functional languages like Ocaml and Haskell and formally verifiable specification languages like Coq.
submitted by ydtm to btc [link] [comments]

Preventing double-spends is an "embarrassingly parallel" massive search problem - like Google, SETI@home, Folding@home, or PrimeGrid. BUIP024 "address sharding" is similar to Google's MapReduce & Berkeley's BOINC grid computing - "divide-and-conquer" providing unlimited on-chain scaling for Bitcoin.

TL;DR: Like all other successful projects involving "embarrassingly parallel" search problems in massive search spaces, Bitcoin can and should - and inevitably will - move to a distributed computing paradigm based on successful "sharding" architectures such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture) - which use simple mathematical "decompose" and "recompose" operations to break big problems into tiny pieces, providing virtually unlimited scaling (plus fault tolerance) at the logical / software level, on top of possibly severely limited (and faulty) resources at the physical / hardware level.
The discredited "heavy" (and over-complicated) design philosophy of centralized "legacy" dev teams such as Core / Blockstream (requiring every single node to download, store and verify the massively growing blockchain, and pinning their hopes on non-existent off-chain vaporware such as the so-called "Lightning Network" which has no mathematical definition and is missing crucial components such as decentralized routing) is doomed to failure, and will be out-competed by simpler on-chain "lightweight" distributed approaches such as distributed trustless Merkle trees or BUIP024's "Address Sharding" emerging from independent devs such as u/thezerg1 (involved with Bitcoin Unlimited).
No one in their right mind would expect Google's vast search engine to fit entirely on a Raspberry Pi behind a crappy Internet connection - and no one in their right mind should expect Bitcoin's vast financial network to fit entirely on a Raspberry Pi behind a crappy Internet connection either.
Any "normal" (ie, competent) company with $76 million to spend could provide virtually unlimited on-chain scaling for Bitcoin in a matter of months - simply by working with devs who would just go ahead and apply the existing obvious mature successful tried-and-true "recipes" for solving "embarrassingly parallel" search problems in massive search spaces, based on standard DISTRIBUTED COMPUTING approaches like Google Search (based on Google's MapReduce algorithm), or [email protected], [email protected], or PrimeGrid (based on Berkeley's BOINC grid computing architecture). The fact that Blockstream / Core devs refuse to consider any standard DISTRIBUTED COMPUTING approaches just proves that they're "embarrassingly stupid" - and the only way Bitcoin will succeed is by routing around their damage.
Proven, mature sharding architectures like the ones powering Google Search, SETI@home, Folding@home, or PrimeGrid will allow Bitcoin to achieve virtually unlimited on-chain scaling, with minimal disruption to the existing Bitcoin network topology and mining and wallet software.
Longer Summary:
People who argue that "Bitcoin can't scale" - because it involves major physical / hardware requirements (lots of processing power, upload bandwidth, storage space) - are at best simply misinformed or incompetent - or at worst outright lying to you.
Bitcoin mainly involves searching the blockchain to prevent double-spends - and so it is similar to many other projects involving "embarrassingly parallel" searching in massive search spaces - like Google Search, SETI@home, Folding@home, or PrimeGrid.
But there's a big difference between those long-running wildly successful massively distributed infinitely scalable parallel computing projects, and Bitcoin.
Those other projects do their data storage and processing across a distributed network. But Bitcoin (under the misguided "leadership" of Core / Blockstream devs) insists on a fatally flawed design philosophy where every individual node must be able to download, store and verify the system's entire data structure. And it's even worse than that - they want to let the least powerful nodes in the system dictate the resource requirements for everyone else.
Meanwhile, those other projects are all based on some kind of "distributed computing" involving "sharding". They achieve massive scaling by adding a virtually unlimited (and fault-tolerant) logical / software layer on top of the underlying resource-constrained / limited physical / hardware layer - using approaches like Google's MapReduce algorithm or Berkeley's Open Infrastructure for Network Computing (BOINC) grid computing architecture.
This shows that it is a fundamental error to continue insisting on viewing an individual Bitcoin "node" as the fundamental "unit" of the Bitcoin network. Coordinated distributed pools already exist for mining the blockchain - and eventually coordinated distributed trustless architectures will also exist for verifying and querying it. Any architecture or design philosophy where a single "node" is expected to be forever responsible for storing or verifying the entire blockchain is the wrong approach, and is doomed to failure.
The most well-known example of this doomed approach is Blockstream / Core's "roadmap" - which is based on two disastrously erroneous design requirements:
  • Core / Blockstream erroneously insist that the entire blockchain must always be downloadable, storable and verifiable on a single node, as dictated by the least powerful nodes in the system (eg, u/bitusher in Costa Rica, or u/Luke-Jr in the underserved backwoods of Florida); and
  • Core / Blockstream support convoluted, incomplete off-chain scaling approaches such as the so-called "Lightning Network" - which lacks a mathematical foundation, and also has some serious gaps (eg, no solution for decentralized routing).
Instead, the future of Bitcoin will inevitably be based on unlimited on-chain scaling, where all of Bitcoin's existing algorithms and data structures and networking are essentially preserved unchanged / as-is - but they are distributed at the logical / software level using sharding approaches such as u/thezerg1's BUIP024 or distributed trustless Merkle trees.
These kinds of sharding architectures will allow individual nodes to use a minimum of physical resources to access a maximum of logical storage and processing resources across a distributed network with virtually unlimited on-chain scaling - where every node will be able to use and verify the entire blockchain without having to download and store the whole thing - just like Google Search, SETI@home, Folding@home, or PrimeGrid and other successful distributed sharding-based projects have already been successfully doing for years.
Details:
Sharding, which has been so successful in many other areas, is a topic that keeps resurfacing in various shapes and forms among independent Bitcoin developers.
The highly successful track record of sharding architectures on other projects involving "embarrassingly parallel" massive search problems (harnessing resource-constrained machines at the physical level into a distributed network at the logical level, in order to provide fault tolerance and virtually unlimited scaling searching for web pages, interstellar radio signals, protein sequences, or prime numbers in massive search spaces up to hundreds of terabytes in size) provides convincing evidence that sharding architectures will also work for Bitcoin (which also requires virtually unlimited on-chain scaling, searching the ever-expanding blockchain for previous "spends" from an existing address, before appending a new transaction from this address to the blockchain).
Below are some links involving proposals for sharding Bitcoin, plus more discussion and related examples.
BUIP024: Extension Blocks with Address Sharding
https://np.reddit.com/btc/comments/54afm7/buip024_extension_blocks_with_address_sharding/
Why aren't we as a community talking about Sharding as a scaling solution?
https://np.reddit.com/Bitcoin/comments/3u1m36/why_arent_we_as_a_community_talking_about/
(There are some detailed, partially encouraging comments from u/petertodd in that thread.)
[Brainstorming] Could Bitcoin ever scale like BitTorrent, using something like "mempool sharding"?
https://np.reddit.com/btc/comments/3v070a/brainstorming_could_bitcoin_ever_scale_like/
[Brainstorming] "Let's Fork Smarter, Not Harder"? Can we find some natural way(s) of making the scaling problem "embarrassingly parallel", perhaps introducing some hierarchical (tree) structures or some natural "sharding" at the level of the network and/or the mempool and/or the blockchain?
https://np.reddit.com/btc/comments/3wtwa7/brainstorming_lets_fork_smarter_not_harder_can_we/
"Braiding the Blockchain" (32 min + Q&A): We can't remove all sources of latency. We can redesign the "chain" to tolerate multiple simultaneous writers. Let miners mine and validate at the same time. Ideal block time / size / difficulty can become emergent per-node properties of the network topology
https://np.reddit.com/btc/comments/4su1gf/braiding_the_blockchain_32_min_qa_we_cant_remove/
Some kind of sharding - perhaps based on address sharding as in BUIP024, or based on distributed trustless Merkle trees as proposed earlier by u/thezerg1 - is very likely to turn out to be the simplest, and safest approach towards massive on-chain scaling.
A thought experiment showing that we already have most of the ingredients for a kind of simplistic "instant sharding"
A simplistic thought experiment can be used to illustrate how easy it could be to do sharding - with almost no changes to the existing Bitcoin system.
Recall that Bitcoin addresses and keys are composed from an alphabet of 58 characters. So, in this simplified thought experiment, we will outline a way to add a kind of "instant sharding" within the existing system - by using the last character of each address in order to assign that address to one of 58 shards.
(Maybe you can already see where this is going...)
Similar to vanity address generation, a user who wants to receive Bitcoins would be required to generate 58 different receiving addresses (each ending with a different character) - and, similarly, miners could be required to pick one of the 58 shards to mine on.
Then, when a user wanted to send money, they would have to look at the last character of their "send from" address - and also select a "send to" address ending in the same character - and presto! we already have a kind of simplistic "instant sharding". (And note that this part of the thought experiment would require only the "softest" kind of soft fork: indeed, we haven't changed any of the code at all, but instead we simply adopted a new convention by agreement, while using the existing code.)
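Here is a minimal Python sketch of that convention (illustrative only - real base58 addresses include checksums and other details ignored here): each address belongs to the shard named by its final base58 character, and a transaction is valid under the convention only if both addresses land in the same shard.

    BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def shard_of(address: str) -> int:
        """Assign an address to one of 58 shards by its last character."""
        return BASE58.index(address[-1])

    def same_shard(send_addr: str, recv_addr: str) -> bool:
        """A miner on shard k mines only transactions staying within shard k."""
        return shard_of(send_addr) == shard_of(recv_addr)

    print(shard_of("1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2"))  # shard 1 (the '2')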
Of course, this simplistic "instant sharding" example would still need a few more features in order to be complete - but they'd all be fairly straightforward to provide:
  • A transaction can actually send from multiple addresses, to multiple addresses - so the approach of simply looking at the final character of a single (receive) address would not be enough to instantly assign a transaction to a particular shard. But a slightly more sophisticated decision criterion could easily be developed - and computed using code - to assign every transaction to a particular shard, based on the "from" and "to" addresses in the transaction. The basic concept from the "simplistic" example would remain the same, sharding the network based on some characteristic of transactions.
  • If we had 58 shards, then the mining reward would have to be decreased to 1/58 of what it currently is - and also the mining hash power on each of the shards would end up being roughly 1/58 of what it is now. In general, many people might agree that decreased mining rewards would actually be a good thing (spreading out mining rewards among more people, instead of the current problems where mining is done by about 8 entities). Also, network hashing power has been growing insanely for years, so we probably have way more than enough needed to secure the network - after all, Bitcoin was secure back when network hash power was 1/58 of what it is now.
  • This simplistic example does not handle cases where you need to do "cross-shard" transactions. But it should be feasible to implement such a thing. The various proposals from u/thezerg1 such as BUIP024 do deal with "cross-shard" transactions.
(Also, the fact that a simplified address-based sharding mechanics can be outlined in just a few paragraphs as shown here suggests that this might be "simple and understandable enough to actually work" - unlike something such as the so-called "Lightning Network", which is actually just a catchy-sounding name with no clearly defined mechanics or mathematics behind it.)
Addresses are plentiful, and can be generated locally, and you can generate addresses satisfying a certain pattern (eg ending in a certain character) the same way people can already generate vanity addresses. So imposing a "convention" where the "send" and "receive" address would have to end in the same character (and where the miner has to only mine transactions in that shard) - would be easy to understand and do.
Similarly, the earlier solution proposed by u/thezerg1, involving distributed trustless Merkle trees, is easy to understand: you'd just be distributing the Merkle tree across multiple nodes, while still preserving its immutability guarantees.
Such approaches don't really change much about the actual system itself. They preserve the existing system, and just split its data structures into multiple pieces, distributed across the network. As long as we have the appropriate operators for decomposing and recomposing the pieces, then everything should work the same - but more efficiently, with unlimited on-chain scaling, and much lower resource requirements.
The examples below show how these kinds of "sharding" approaches have already been implemented successfully in many other systems.
Massive search is already efficiently performed with virtually unlimited scaling using divide-and-conquer / decompose-and-recompose approaches such as MapReduce and BOINC.
Every time you do a Google search, you're using Google's MapReduce algorithm to solve an embarrassingly parallel problem.
And distributed computing grids using the Berkeley Open Infrastructure for Network Computing (BOINC) are constantly setting new records searching for protein combinations, prime numbers, or radio signals from possible intelligent life in the universe.
We all use Google to search hundreds of terabytes of data on the web and get results in a fraction of a second - using cheap "commodity boxes" on the server side, and possibly using limited bandwidth on the client side - with fault tolerance to handle crashing servers and dropped connections.
Other examples are Folding@home, SETI@home and PrimeGrid - involving searching massive search spaces for protein sequences, interstellar radio signals, or prime numbers hundreds of thousands of digits long. Each of these examples uses sharding to decompose a giant search space into smaller sub-spaces which are searched separately in parallel and then the resulting (sub-)solutions are recomposed to provide the overall search results.
It seems obvious to apply this tactic to Bitcoin - searching the blockchain for existing transactions involving a "send" from an address, before appending a new "send" transaction from that address to the blockchain.
Some people might object that those systems are different from Bitcoin.
But we should remember that preventing double-spends (the main thing that Bitcoin does) is, after all, an embarrassingly parallel massive search problem - and all of these other systems also involve embarrassingly parallel massive search problems.
The mathematics of Google's MapReduce and Berkeley's BOINC is simple, elegant, powerful - and provably correct.
Google's MapReduce and Berkeley's BOINC have demonstrated that in order to provide massive scaling for efficient searching of massive search spaces, all you need is...
  • an appropriate "decompose" operation,
  • an appropriate "recompose" operation,
  • the necessary coordination mechanisms
...in order to distribute a single problem across multiple, cheap, fault-tolerant processors.
This allows you to decompose the problem into tiny sub-problems, solving each sub-problem to provide a sub-solution, and then recompose the sub-solutions into the overall solution - gaining virtually unlimited scaling and massive efficiency.
The only "hard" part involves analyzing the search space in order to select the appropriate DECOMPOSE and RECOMPOSE operations which guarantee that recomposing the "sub-solutions" obtained by decomposing the original problem is equivalent to the solving the original problem. This essential property could be expressed in "pseudo-code" as follows:
  • (DECOMPOSE ; SUB-SOLVE ; RECOMPOSE) = (SOLVE)
Selecting the appropriate DECOMPOSE and RECOMPOSE operations (and implementing the inter-machine communication coordination) can be somewhat challenging, but it's certainly doable.
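Here is a minimal Python sketch (my toy model, not an actual proposal) of this property applied to the double-spend search: DECOMPOSE splits the set of already-spent outputs across shards, SUB-SOLVE searches each shard independently, and RECOMPOSE ORs the sub-answers together. Membership in the union is exactly the OR of membership in the parts, which is what makes the equation above hold.

    NUM_SHARDS = 58

    def decompose(spent_outputs):
        """DECOMPOSE: split the search space across shards."""
        shards = [set() for _ in range(NUM_SHARDS)]
        for out in spent_outputs:
            shards[hash(out) % NUM_SHARDS].add(out)  # toy shard assignment
        return shards

    def sub_solve(shard, candidate):
        """SUB-SOLVE: search one sub-space for a previous spend."""
        return candidate in shard

    def already_spent(shards, candidate):
        """RECOMPOSE: any shard reporting a hit means a double-spend.
        In a real system each shard lives on a different machine."""
        return any(sub_solve(shard, candidate) for shard in shards)

    shards = decompose({"txid1:0", "txid2:1"})
    print(already_spent(shards, "txid1:0"))  # True  -> reject the new spend
    print(already_spent(shards, "txid9:0"))  # False -> safe to append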
In fact, as mentioned already, these things have already been done in many distributed computing systems. So there's hardly any "original" work to be done in this case. All we need to focus on now is translating the existing single-processor architecture of Bitcoin to a distributed architecture, adopting the mature, proven, efficient "recipes" provided by the many examples of successful distributed systems already up and running, such as Google Search (based on Google's MapReduce algorithm), or SETI@home, Folding@home, or PrimeGrid (based on Berkeley's BOINC grid computing architecture).
That's what any "competent" company with $76 million to spend would have done already - simply work with some devs who know how to implement open-source distributed systems, and focus on adapting Bitcoin's particular data structures (merkle trees, hashed chains) to a distributed environment. That's a realistic roadmap that any team of decent programmers with distributed computing experience could easily implement in a few months, and any decent managers could easily manage and roll out on a pre-determined schedule - instead of all these broken promises and missed deadlines and non-existent vaporware and pathetic excuses we've been getting from the incompetent losers and frauds involved with Core / Blockstream.
ASIDE: MapReduce and BOINC are based on math - but the so-called "Lightning Network" is based on wishful thinking involving kludges on top of workarounds on top of hacks - which is how you can tell that LN will never work.
Once you have succeeded in selecting the appropriate mathematical DECOMPOSE and RECOMPOSE operations, you get simple massive scaling - and it's also simple for anyone to verify that these operations are correct - often in about a half-page of math and code.
An example of this kind of elegance and brevity (and provable correctness) involving compositionality can be seen in this YouTube clip by the accomplished mathematician Lucius Greg Meredith presenting some operators for scaling Ethereum - in just a half page of code:
https://youtu.be/uzahKc_ukfM?t=1101
Conversely, if you fail to select the appropriate mathematical DECOMPOSE and RECOMPOSE operations, then you end up with a convoluted mess of wishful thinking - like the "whitepaper" for the so-called "Lightning Network", which is just a cool-sounding name with no actual mathematics behind it.
The LN "whitepaper" is an amateurish, non-mathematical meandering mishmash of 60 pages of "Alice sends Bob" examples involving hacks on top of workarounds on top of kludges - also containing a fatal flaw (a lack of any proposed solution for doing decentralized routing).
The disaster of the so-called "Lightning Network" - involving adding never-ending kludges on top of hacks on top of workarounds (plus all kinds of "timing" dependencies) - is reminiscent of the "epicycles" which were desperately added in a last-ditch attempt to make Ptolemy's "geocentric" system work - based on the incorrect assumption that the Sun revolved around the Earth.
This is how you can tell that the approach of the so-called "Lightning Network" is simply wrong, and it would never work - because it fails to provide appropriate (and simple, and provably correct) mathematical DECOMPOSE and RECOMPOSE operations in less than a single page of math and code.
Meanwhile, sharding approaches based on a DECOMPOSE and RECOMPOSE operation are simple and elegant - and "functional" (ie, they don't involve "procedural" timing dependencies like keeping your node running all the time, or closing out your channel before a certain deadline).
Bitcoin only has 6,000 nodes - but the leading sharding-based projects have over 100,000 nodes, with no financial incentives.
Many of these sharding-based projects have many more nodes than the Bitcoin network.
The Bitcoin network currently has about 6,000 nodes - even though there are financial incentives for running a node (ie, verifying your own Bitcoin balance).
Folding@home and SETI@home each have over 100,000 active users - even though these projects don't provide any financial incentives. This higher number of users might be due in part to the low resource demands required in these BOINC-based projects, which are all based on sharding the data set.
Folding@home
As part of the client-server network architecture, the volunteered machines each receive pieces of a simulation (work units), complete them, and return them to the project's database servers, where the units are compiled into an overall simulation.
In 2007, Guinness World Records recognized Folding@home as the most powerful distributed computing network. As of September 30, 2014, the project has 107,708 active CPU cores and 63,977 active GPUs for a total of 40.190 x86 petaFLOPS (19.282 native petaFLOPS). At the same time, the combined efforts of all distributed computing projects under BOINC totals 7.924 petaFLOPS.
SETI@home
Using distributed computing, SETI@home sends the millions of chunks of data to be analyzed off-site by home computers, and then has those computers report the results. Thus what appears to be an onerous problem in data analysis is reduced to a reasonable one by aid from a large, Internet-based community of borrowed computer resources.
Observational data are recorded on 2-terabyte SATA hard disk drives at the Arecibo Observatory in Puerto Rico, each holding about 2.5 days of observations, which are then sent to Berkeley. Arecibo does not have a broadband Internet connection, so data must go by postal mail to Berkeley. Once there, it is divided in both time and frequency domains into work units of 107 seconds of data, or approximately 0.35 megabytes (350 kilobytes or 350,000 bytes), which overlap in time but not in frequency. These work units are then sent from the SETI@home server over the Internet to personal computers around the world to analyze.
Data is merged into a database using SETI@home computers in Berkeley.
The SETI@home distributed computing software runs either as a screensaver or continuously while a user works, making use of processor time that would otherwise be unused.
Active users: 121,780 (January 2015)
PrimeGrid
PrimeGrid is a distributed computing project for searching for prime numbers of world-record size. It makes use of the Berkeley Open Infrastructure for Network Computing (BOINC) platform.
Active users: 8,382 (March 2016)
MapReduce
A MapReduce program is composed of a Map() procedure (method) that performs filtering and sorting (such as sorting students by first name into queues, one queue for each name) and a Reduce() method that performs a summary operation (such as counting the number of students in each queue, yielding name frequencies).
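Here is a minimal Python sketch of exactly that example (a single-machine toy; the real framework distributes the map, shuffle, and reduce steps across many machines):

    from collections import defaultdict

    students = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]

    # Map: emit a (key, value) pair per record.
    mapped = [(name, 1) for name in students]

    # Shuffle: group values by key (the framework normally does this).
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)

    # Reduce: summarize each group, yielding name frequencies.
    frequencies = {key: sum(values) for key, values in groups.items()}
    print(frequencies)  # {'Alice': 3, 'Bob': 2, 'Carol': 1}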
How can we go about developing sharding approaches for Bitcoin?
We have to identify a part of the problem which is in some sense "invariant" or "unchanged" under the operations of DECOMPOSE and RECOMPOSE - and we also have to develop a coordination mechanism which orchestrates the DECOMPOSE and RECOMPOSE operations among the machines.
The simplistic thought experiment above outlined an "instant sharding" approach where we would agree upon a convention where the "send" and "receive" address would have to end in the same character - instantly providing a starting point illustrating some of the mechanics of an actual sharding solution.
BUIP024 involves address sharding and deals with the additional features needed for a complete solution - such as cross-shard transactions.
And distributed trustless Merkle trees would involve storing Merkle trees across a distributed network - which would provide the same guarantees of immutability, while drastically reducing storage requirements.
So how can we apply ideas like MapReduce and BOINC to providing massive on-chain scaling for Bitcoin?
First we have to examine the structure of the problem that we're trying to solve - and we have to try to identify how the problem involves a massive search space which can be decomposed and recomposed.
In the case of Bitcoin, the problem involves:
  • sequentializing (serializing) APPEND operations to a blockchain data structure
  • in such a way as to avoid double-spends
Can we view "preventing Bitcoin double-spends" as a "massive search space problem"?
Yes we can!
Just like Google efficiently searches hundreds of terabytes of web pages for a particular phrase (and SETI@home, Folding@home, PrimeGrid etc. efficiently search massive search spaces for other patterns), in the case of "preventing Bitcoin double-spends", all we're actually doing is searching a massive search space (the blockchain) in order to detect a previous "spend" of the same coin(s).
So, let's imagine how a possible future sharding-based architecture of Bitcoin might look.
We can observe that, in all cases of successful sharding solutions involving searching massive search spaces, the entire data structure is never stored / searched on a single machine.
Instead, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) form a "virtual" layer or grid across multiple machines - allowing the data structure to be distributed across all of them, and allowing users to search across all of them.
This suggests that requiring everyone to store 80 Gigabytes (and growing) of blockchain on their own individual machine should no longer be a long-term design goal for Bitcoin.
Instead, in a sharding environment, the DECOMPOSE and RECOMPOSE operations (and the coordination mechanism) should allow everyone to only store a portion of the blockchain on their machine - while also allowing anyone to search the entire blockchain across everyone's machines.
This might involve something like BUIP024's "address sharding" - or it could involve something like distributed trustless Merkle trees.
In either case, it's easy to see that the basic data structures of the system would remain conceptually unaltered - but in the sharding approaches, these structures would be logically distributed across multiple physical devices, in order to provide virtually unlimited scaling while dramatically reducing resource requirements.
This would be the most "conservative" approach to scaling Bitcoin: leaving the data structures of the system conceptually the same - and just spreading them out more, by adding the appropriately defined mathematical DECOMPOSE and RECOMPOSE operators (used in successful sharding approaches), which can be easily proven to preserve the same properties as the original system.
Conclusion
Bitcoin isn't the only project in the world which is permissionless and distributed.
Other projects (the BOINC-based permissionless decentralized SETI@home, Folding@home, and PrimeGrid - as well as Google's (permissioned centralized) MapReduce-based search engine) have already achieved unlimited scaling by providing simple mathematical DECOMPOSE and RECOMPOSE operations (and coordination mechanisms) to break big problems into smaller pieces - without changing the properties of the problems or solutions. This provides massive scaling while dramatically reducing resource requirements - with several projects attracting over 100,000 nodes, much more than Bitcoin's mere 6,000 nodes - without even offering any of Bitcoin's financial incentives.
Although certain "legacy" Bitcoin development teams such as Blockstream / Core have been neglecting sharding-based scaling approaches to massive on-chain scaling (perhaps because their business models are based on misguided off-chain scaling approaches involving radical changes to Bitcoin's current successful network architecture, or even perhaps because their owners such as AXA and PwC don't want a counterparty-free new asset class to succeed and destroy their debt-based fiat wealth), emerging proposals from independent developers suggest that on-chain scaling for Bitcoin will be based on proven sharding architectures such as MapReduce and BOINC - and so we should pay more attention to these innovative, independent developers who are pursuing this important and promising line of research into providing sharding solutions for virtually unlimited on-chain Bitcoin scaling.
submitted by ydtm to btc [link] [comments]

Deconstructing the Blockchain to Approach Physical Limits

arXiv:1810.08092
Date: 2018-11-08
Author(s): Vivek Bagaria, Sreeram Kannan, David Tse, Giulia Fanti, Pramod Viswanath

Link to Paper


Abstract
Transaction throughput, confirmation latency and confirmation reliability are fundamental performance measures of any blockchain system in addition to its security. In a decentralized setting, these measures are limited by two underlying physical network attributes: communication capacity and speed-of-light propagation delay. Existing systems operate far away from these physical limits. In this work we introduce Prism, a new proof-of-work blockchain protocol, which can achieve 1) security against up to 50% adversarial hashing power; 2) optimal throughput up to the capacity C of the network; 3) confirmation latency for honest transactions proportional to the propagation delay D, with confirmation error probability exponentially small in CD ; 4) eventual total ordering of all transactions. Our approach to the design of this protocol is based on deconstructing the blockchain into its basic functionalities and systematically scaling up these functionalities to approach their physical limits.

References
  1. Ethereum Wiki proof of stake faqs: Grinding attacks. https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQs.
  2. David Aldous and Jim Fill. Reversible Markov chains and random walks on graphs, 2002.
  3. Gavin Andresen. Weak block thoughts... bitcoin-dev. https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011157.html.
  4. Vivek Bagaria, Giulia Fanti, Sreeram Kannan, David Tse, and Pramod Viswanath. Prism++: a throughput-latency-security-incentive optimal proof of stake blockchain algorithm. In Working paper, 2018.
  5. Vitalik Buterin. On slow and fast block times, 2015. https://blog.ethereum.org/2015/09/14/on-slow-and-fast-block-times/.
  6. Alex de Vries. Bitcoin’s growing energy problem. Joule, 2(5):801–805, 2018.
  7. C. Decker and R. Wattenhofer. Information propagation in the bitcoin network. In IEEE P2P 2013 Proceedings, pages 1–10, Sept 2013.
  8. Ittay Eyal, Adem Efe Gencer, Emin Gün Sirer, and Robbert Van Renesse. Bitcoin-NG: A scalable blockchain protocol. In NSDI, pages 45–59, 2016.
  9. Ittay Eyal and Emin Gün Sirer. Majority is not enough: Bitcoin mining is vulnerable. Communications of the ACM, 61(7):95–102, 2018.
  10. Juan Garay, Aggelos Kiayias, and Nikos Leonardos. The bitcoin backbone protocol: Analysis and applications. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 281–310. Springer, 2015.
  11. Dina Katabi, Mark Handley, and Charlie Rohrs. Congestion control for high bandwidth-delay product networks. ACM SIGCOMM computer communication review, 32(4):89–102, 2002.
  12. Aggelos Kiayias, Alexander Russell, Bernardo David, and Roman Oliynykov. Ouroboros: A provably secure proof-of-stake blockchain protocol. In Annual International Cryptology Conference, pages 357–388. Springer, 2017.
  13. Uri Klarman, Soumya Basu, Aleksandar Kuzmanovic, and Emin Gün Sirer. bloXroute: A scalable trustless blockchain distribution network whitepaper.
  14. Yoad Lewenberg, Yoram Bachrach, Yonatan Sompolinsky, Aviv Zohar, and Jeffrey S Rosenschein. Bitcoin mining pools: A cooperative game theoretic analysis. In Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, pages 919–927. International Foundation for Autonomous Agents and Multiagent Systems, 2015.
  15. Yoad Lewenberg, Yonatan Sompolinsky, and Aviv Zohar. Inclusive block chain protocols. In International Conference on Financial Cryptography and Data Security, pages 528–547. Springer, 2015.
  16. Chenxing Li, Peilun Li, Wei Xu, Fan Long, and Andrew Chi-chih Yao. Scaling nakamoto consensus to thousands of transactions per second. arXiv preprint arXiv:1805.03870, 2018.
  17. Wenting Li, Sébastien Andreina, Jens-Matthias Bohli, and Ghassan Karame. Securing proof-of-stake blockchain protocols. In Data Privacy Management, Cryptocurrencies and Blockchain Technology, pages 297–315. Springer, 2017.
  18. Satoshi Nakamoto. Bitcoin: A peer-to-peer electronic cash system. 2008.
  19. Christopher Natoli and Vincent Gramoli. The balance attack against proof-of-work blockchains: The r3 testbed as an example. arXiv preprint arXiv:1612.09426, 2016.
  20. Kartik Nayak, Srijan Kumar, Andrew Miller, and Elaine Shi. Stubborn mining: Generalizing selfish mining and combining with an eclipse attack. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on, pages 305–320. IEEE, 2016.
  21. Rafael Pass, Lior Seeman, and Abhi Shelat. Analysis of the blockchain protocol in asynchronous networks. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 643–673. Springer, 2017.
  22. Rafael Pass and Elaine Shi. Fruitchains: A fair blockchain. In Proceedings of the ACM Symposium on Principles of Distributed Computing. ACM, 2017.
  23. Rafael Pass and Elaine Shi. Hybrid consensus: Efficient consensus in the permissionless model. In LIPIcs-Leibniz International Proceedings in Informatics, volume 91. Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2017.
  24. Rafael Pass and Elaine Shi. Thunderella: Blockchains with optimistic instant confirmation. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pages 3–33. Springer, 2018.
  25. Peter R Rizun. Subchains: A technique to scale bitcoin and improve the user experience. Ledger, 1:38–52, 2016.
  26. Ayelet Sapirshtein, Yonatan Sompolinsky, and Aviv Zohar. Optimal selfish mining strategies in bitcoin. In International Conference on Financial Cryptography and Data Security, pages 515–532. Springer, 2016.
  27. Y. Sompolinsky and A. Zohar. Phantom: A scalable blockdag protocol, 2018.
  28. Yonatan Sompolinsky, Yoad Lewenberg, and Aviv Zohar. Spectre: A fast and scalable cryptocurrency protocol. IACR Cryptology ePrint Archive, 2016:1159, 2016.
  29. Yonatan Sompolinsky and Aviv Zohar. Secure high-rate transaction processing in bitcoin. In International Conference on Financial Cryptography and Data Security, pages 507–527. Springer, 2015.
  30. Statoshi. Bandwidth usage. https://statoshi.info/dashboard/db/bandwidth-usage.
  31. TierNolan. Decoupling transactions and pow. Bitcoin Forum. https://bitcointalk.org/index.php?topic=179598.0.
submitted by dj-gutz to myrXiv

Welcome to /r/millionairemakers! Here's a one minute summary of what we're all about.

Welcome, everyone, to /r/millionairemakers!
This subreddit stemmed from the idea that if a million people each gave a dollar to someone, we could make a millionaire. Since then, we've donated over $47,000 to 14 past winners.
We hold monthly drawings that anyone can enter by commenting. A winner is then randomly chosen using the Bitcoin blockchain (read more about our provably fair drawing method here). Once a winner is chosen, everyone donates a dollar via PayPal, Google Wallet, Bitcoin, or ChangeTip.
Our 15th winner, silversonic99, has just been chosen. This is our chance to change someone's life. To donate, either create a ChangeTip account or wait until the winner's post is up. Click here to be reminded to check it out!
We want to highlight that this is not a lottery. No one puts any money up beforehand, and all donations are completely voluntary. We focus on the ability to change someone else's life.
To be reminded to enter drawings and donate to winners:
See here for the full FAQ.
Please ask any other questions you have :)
submitted by Paltry_Digger to millionairemakers

Flux: Revisiting Near Blocks for Proof-of-Work Blockchains

Cryptology ePrint Archive: Report 2018/415
Date: 2018-05-29
Author(s): Alexei Zamyatin∗, Nicholas Stifter, Philipp Schindler, Edgar Weippl, William J. Knottenbelt∗

Link to Paper


Abstract
The term near or weak blocks describes Bitcoin blocks whose PoW does not meet the required target difficulty to be considered valid under the regular consensus rules of the protocol. Near blocks are generally associated with protocol improvement proposals striving towards shorter transaction confirmation times. Existing proposals assume miners will act rationally based solely on intrinsic incentives arising from the adoption of these changes, such as earlier detection of blockchain forks.
In this paper we present Flux, a protocol extension for proof-of-work blockchains that leverages near blocks, a new block reward distribution mechanism, and an improved branch selection policy to incentivize honest participation of miners. Our protocol reduces mining variance, improves the responsiveness of the underlying blockchain in terms of transaction processing, and can be deployed as a velvet fork, without conflicting modifications to the underlying base protocol. We perform an initial analysis of selfish mining which suggests that Flux not only provides security guarantees similar to pure Nakamoto consensus, but potentially renders selfish mining strategies less profitable.

References
[1] Bitcoin Cash. https://www.bitcoincash.org/. Accessed: 2017-01-24.
[2] P2pool. http://p2pool.org/. Accessed: 2017-05-10.
[3] G. Andresen. Comment in "faster blocks vs bigger blocks". https://bitcointalk.org/index.php?topic=673415.msg7658481#msg7658481, 2014. Accessed: 2017-05-10.
[4] G. Andresen. [bitcoin-dev] weak block thoughts... https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-September/011157.html, 2015. Accessed: 2017-05-10.
[5] E. Androulaki, S. Capkun, and G. O. Karame. Two bitcoins at the price of one? double-spending attacks on fast payments in bitcoin. In CCS, 2012.
[6] J. Becker, D. Breuker, T. Heide, J. Holler, H. P. Rauer, and R. Böhme. Can we afford integrity by proof-of-work? Scenarios inspired by the bitcoin currency. In WEIS. Springer, 2012.
[7] I. Bentov, R. Pass, and E. Shi. Snow white: Provably secure proofs of stake. https://eprint.iacr.org/2016/919.pdf, 2016. Accessed: 2016-11-08.
[8] Bitcoin community. OP_RETURN. https://en.bitcoin.it/wiki/OP_RETURN. Accessed: 2017-05-10.
[9] Bitcoin Wiki. Merged mining specification. https://en.bitcoin.it/wiki/Merged_mining_specification. Accessed: 2017-05-10.
[10] Blockchain.info. Hashrate Distribution in Bitcoin. https://blockchain.info/de/pools. Accessed: 2017-05-10.
[11] Blockchain.info. Unconfirmed bitcoin transactions. https://blockchain.info/unconfirmed-transactions. Accessed: 2017-05-10.
[12] J. Bonneau, A. Miller, J. Clark, A. Narayanan, J. A. Kroll, and E. W. Felten. Sok: Research perspectives and challenges for bitcoin and cryptocurrencies. In IEEE Symposium on Security and Privacy, 2015.
[13] V. Buterin. Ethereum: A next-generation smart contract and decentralized application platform. https://github.com/ethereum/wiki/wiki/White-Paper, 2014. Accessed: 2016-08-22.
[14] C. Decker and R. Wattenhofer. Information propagation in the bitcoin network. In Peer-to-Peer Computing (P2P), 2013 IEEE Thirteenth International Conference on, pages 1–10. IEEE, 2013.
[15] J. R. Douceur. The sybil attack. In International Workshop on Peer-toPeer Systems, pages 251–260. Springer, 2002.
[16] I. Eyal, A. E. Gencer, E. G. Sirer, and R. Renesse. Bitcoin-ng: A scalable blockchain protocol. In 13th USENIX Security Symposium on Networked Systems Design and Implementation (NSDI’16). USENIX Association, Mar 2016.
[17] I. Eyal and E. G. Sirer. Majority is not enough: Bitcoin mining is vulnerable. In Financial Cryptography and Data Security, pages 436–454. Springer, 2014.
[18] J. Garay, A. Kiayias, and N. Leonardos. The bitcoin backbone protocol: Analysis and applications. In Advances in Cryptology-EUROCRYPT 2015, pages 281–310. Springer, 2015.
[19] A. E. Gencer, S. Basu, I. Eyal, R. Renesse, and E. G. Sirer. Decentralization in bitcoin and ethereum networks. In Proceedings of the 22nd International Conference on Financial Cryptography and Data Security (FC). Springer, 2018.
[20] A. Gervais, G. Karame, S. Capkun, and V. Capkun. Is bitcoin a decentralized currency? volume 12, pages 54–60, 2014.
[21] A. Gervais, G. O. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun. On the security and performance of proof of work blockchains. https://eprint.iacr.org/2016/555.pdf, 2016. Accessed: 2016-08-10.
[22] M. Jakobsson and A. Juels. Proofs of work and bread pudding protocols. In Secure Information Networks, pages 258–272. Springer, 1999.
[23] A. Judmayer, A. Zamyatin, N. Stifter, A. G. Voyiatzis, and E. Weippl. Merged mining: Curse or cure? In CBT’17: Proceedings of the International Workshop on Cryptocurrencies and Blockchain Technology, Sep 2017.
[24] G. O. Karame, E. Androulaki, M. Roeschlin, A. Gervais, and S. Čapkun. Misbehavior in bitcoin: A study of double-spending and accountability. Volume 18, page 2. ACM, 2015.
[25] A. Kiayias, A. Miller, and D. Zindros. Non-interactive proofs of proof-of-work. Cryptology ePrint Archive, Report 2017/963, 2017. Accessed:2017-10-03.
[26] A. Kiayias, A. Russell, B. David, and R. Oliynykov. Ouroboros: A provably secure proof-of-stake blockchain protocol. In Annual International Cryptology Conference, pages 357–388. Springer, 2017.
[27] Y. Lewenberg, Y. Sompolinsky, and A. Zohar. Inclusive block chain protocols. In Financial Cryptography and Data Security, pages 528–547. Springer, 2015.
[28] Litecoin community. Litecoin reference implementation. https://github.com/litecoin-project/litecoin. Accessed: 2018-05-03.
[29] G. Maxwell. Comment in ”[bitcoin-dev] weak block thoughts...”. https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-Septembe011198.html, 2016. Accessed: 2017-05-10.
[30] S. Micali. Algorand: The efficient and democratic ledger. http://arxiv.org/abs/1607.01341, 2016. Accessed: 2017-02-09.
[31] S. Nakamoto. Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf, Dec 2008. Accessed: 2015-07-01.
[32] Namecoin community. Namecoin reference implementation. https://github.com/namecoin/namecoin. Accessed: 2017-05-10.
[33] Narayanan, Arvind and Bonneau, Joseph and Felten, Edward and Miller, Andrew and Goldfeder, Steven. Bitcoin and cryptocurrency technologies. https://d28rh4a8wq0iu5.cloudfront.net/bitcointech/readings/princeton bitcoin book.pdf?a=1, 2016. Accessed: 2016-03-29.
[34] K. Nayak, S. Kumar, A. Miller, and E. Shi. Stubborn mining: Generalizing selfish mining and combining with an eclipse attack. In 1st IEEE European Symposium on Security and Privacy, 2016. IEEE, 2016.
[35] K. J. O’Dwyer and D. Malone. Bitcoin mining and its energy footprint. 2014.
[36] R. Pass and E. Shi. Fruitchains: A fair blockchain. http://eprint.iacr.org/2016/916.pdf, 2016. Accessed: 2016-11-08.
[37] C. Pérez-Solà, S. Delgado-Segura, G. Navarro-Arribas, and J. Herrera-Joancomartí. Double-spending prevention for bitcoin zero-confirmation transactions. http://eprint.iacr.org/2017/394, 2017. Accessed: 2017-06-
[38] Pseudonymous(”TierNolan”). Decoupling transactions and pow. https://bitcointalk.org/index.php?topic=179598.0, 2013. Accessed: 2017-05-10.
[39] P. R. Rizun. Subchains: A technique to scale bitcoin and improve the user experience. Ledger, 1:38–52, 2016.
[40] K. Rosenbaum. Weak blocks - the good and the bad. http://popeller.io/ index.php/2016/01/19/weak-blocks-the-good-and-the-bad/, 2016. Accessed: 2017-05-10.
[41] K. Rosenbaum and R. Russell. Iblt and weak block propagation performance. Scaling Bitcoin Hong Kong (6 December 2015), 2015.
[42] M. Rosenfeld. Analysis of hashrate-based double spending. http://arxiv.org/abs/1402.2009, 2014. Accessed: 2016-03-09.
[43] R. Russell. Weak block simulator for bitcoin. https://github.com/rustyrussell/weak-blocks, 2014. Accessed: 2017-05-10.
[44] A. Sapirshtein, Y. Sompolinsky, and A. Zohar. Optimal selfish mining strategies in bitcoin. http://arxiv.org/pdf/1507.06183.pdf, 2015. Accessed: 2016-08-22.
[45] E. B. Sasson, A. Chiesa, C. Garman, M. Green, I. Miers, E. Tromer, and M. Virza. Zerocash: Decentralized anonymous payments from bitcoin. In Security and Privacy (SP), 2014 IEEE Symposium on, pages 459–474. IEEE, 2014.
[46] Satoshi Nakamoto. Comment in ”bitdns and generalizing bitcoin” bitcointalk thread. https://bitcointalk.org/index.php?topic=1790.msg28696#msg28696. Accessed: 2017-06-05.
[47] Y. Sompolinsky, Y. Lewenberg, and A. Zohar. Spectre: A fast and scalable cryptocurrency protocol. Cryptology ePrint Archive, Report 2016/1159, 2016. Accessed: 2017-02-20.
[48] Y. Sompolinsky and A. Zohar. Secure high-rate transaction processing in bitcoin. In Financial Cryptography and Data Security, pages 507–527. Springer, 2015.
[49] Suhas Daftuar. Bitcoin merge commit: ”mining: Select transactions using feerate-with-ancestors”. https://github.com/bitcoin/bitcoin/pull/7600. Accessed: 2017-05-10.
[50] M. B. Taylor. Bitcoin and the age of bespoke silicon. In Proceedings of the 2013 International Conference on Compilers, Architectures and Synthesis for Embedded Systems, page 16. IEEE Press, 2013.
[51] F. Tschorsch and B. Scheuermann. Bitcoin and beyond: A technical survey on decentralized digital currencies. In IEEE Communications Surveys Tutorials, volume PP, pages 1–1, 2016.
[52] P. J. Van Laarhoven and E. H. Aarts. Simulated annealing. In Simulated annealing: Theory and applications, pages 7–15. Springer, 1987.
[53] A. Zamyatin, N. Stifter, A. Judmayer, P. Schindler, E. Weippl, and W. J. Knottenbelt. (Short Paper) A Wild Velvet Fork Appears! Inclusive Blockchain Protocol Changes in Practice. In 5th Workshop on Bitcoin and Blockchain Research, Financial Cryptography and Data Security 18 (FC). Springer, 2018.
[54] F. Zhang, I. Eyal, R. Escriva, A. Juels, and R. Renesse. Rem: Resourceefficient mining for blockchains. http://eprint.iacr.org/2017/179, 2017. Accessed: 2017-03-24.
submitted by dj-gutz to myrXiv

The missing explanation of Proof of Stake Version 3 - Article by earlz.net

The missing explanation of Proof of Stake Version 3

In every cryptocurrency there must be some consensus mechanism that keeps the entire distributed network in sync. When Bitcoin first came out, it introduced the Proof of Work (PoW) system. PoW is done by cryptographically hashing a piece of data (the block header) over and over. Because of how one-way hashing works, one tiny change in the data produces a completely different hash. Participants in the network determine whether the PoW is valid by judging whether the final hash meets a certain condition, called the difficulty. The difficulty is an ever-changing "target" which the hash must fall below. Whenever the network is creating blocks faster than scheduled, this target is adjusted automatically so that it becomes more and more difficult to meet, and thus requires more and more computing power to find a matching hash within the target time of 10 minutes.
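For intuition, here is a minimal Python sketch of that retargeting idea, assuming a simple proportional rule; the real Bitcoin consensus code retargets once every 2016 blocks and additionally clamps each adjustment to a factor of four:

    TARGET_BLOCK_TIME = 600    # seconds; one block scheduled every 10 minutes
    RETARGET_INTERVAL = 2016   # blocks between difficulty adjustments

    def retarget(old_target: int, actual_timespan: int) -> int:
        # Blocks arriving faster than scheduled shrink the target (harder to meet);
        # slower blocks grow it (easier to meet).
        expected_timespan = RETARGET_INTERVAL * TARGET_BLOCK_TIME
        return old_target * actual_timespan // expected_timespan

    # Example: the last interval took half the scheduled time, so the target halves.
    old_target = 1 << 220
    assert retarget(old_target, RETARGET_INTERVAL * 300) == old_target // 2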

Definitions

Some basic definitions may be unfamiliar to readers who haven't worked with the blockchain code; these are:

Proof of Work and Blockchain Consensus Systems

Proof of Work is a proven consensus mechanism that has made Bitcoin secure and trustworthy for 8 years now. However, it is not without its fair share of problems. PoW's major drawbacks are:
  1. PoW wastes a lot of electricity, harming the environment.
  2. PoW benefits greatly from economies of scale, so it tends to benefit big players the most, rather than small participants in the network.
  3. PoW provides no incentive to use or keep the tokens.
  4. PoW has some centralization risks, because it tends to encourage miners to participate in the biggest mining pool (a group of miners who share the block reward), thus the biggest mining pool operator holds a lot of control over the network.
Proof of Stake was invented to solve many of these problems by allowing participants to create and mine new blocks (and thus also get a block reward), simply by holding onto coins in their wallet and allowing their wallet to do automatic "staking". Proof Of Stake was originally invented by Sunny King and implemented in Peercoin. It has since been improved and adapted by many other people. This includes "Proof of Stake Version 2" by Pavel Vasin, "Proof of Stake Velocity" by Larry Ren, and most recently CASPER by Vlad Zamfir, as well as countless other experiments and lesser known projects.
For Qtum we have decided to build upon "Proof of Stake Version 3", an improvement over version 2 that was also made by Pavel Vasin and implemented in the Blackcoin project. This version of PoS as implemented in Blackcoin is what we will be describing here. Some minor details of it have been modified in Qtum, but the core consensus model is identical.
For many community members and developers alike, proof of stake is a difficult topic, because very little has been written on how it manages to keep the network safe using only proof of ownership of tokens on the network. This blog post will go into fine detail about Proof of Stake Version 3: how its blocks are created and validated, and ultimately how a pure Proof of Stake blockchain can be secured. This will assume some technical knowledge, but I will try to explain things so that most of the knowledge can be gathered from context. You should at least be familiar with the concept of a UTXO-based blockchain.
Before we talk about PoS, it helps to understand how the much simpler PoW consensus mechanism works. Its mining process can be described in just a few lines of pseudo-code:
while(blockhash > difficulty) {
    block.nonce = block.nonce + 1
    blockhash = sha256(sha256(block))
}
A hash is a cryptographic algorithm which takes an arbitrary amount of input data, does a lot of processing of it, and outputs a fixed-size "digest" of that data. It is impossible to figure out the input data from the digest alone. So, PoW tends to function like a lottery, where you find out if you won by creating the hash and checking it against the target, and you create another ticket by changing some piece of data in the block. In Bitcoin's case, the nonce is used for this, as well as some other fields (usually called "extraNonce"). Once a block hash is found which is less than the difficulty target, the block is valid and can be broadcast to the rest of the distributed network. Miners will then see it and start building the next block on top of it.
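As a toy illustration of this lottery, here is a minimal Python sketch; the string header and the very easy target are assumptions for demonstration, not Bitcoin's real 80-byte header encoding:

    import hashlib

    def double_sha256(data: bytes) -> int:
        # Bitcoin hashes the header twice with SHA-256; compare the digest as an integer.
        return int.from_bytes(hashlib.sha256(hashlib.sha256(data).digest()).digest(), "big")

    def mine(header_base: bytes, target: int) -> int:
        # Grind the nonce until the block hash falls below the target.
        nonce = 0
        while double_sha256(header_base + nonce.to_bytes(8, "little")) >= target:
            nonce += 1  # each new nonce is another lottery ticket
        return nonce

    # Toy target: roughly one in 65,536 hashes wins.
    print("found nonce:", mine(b"toy-block-header", 1 << 240))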

Proof of Stake's Protocol Structures and Rules

Now enter Proof of Stake. We have these goals for PoS:
  1. Impossible to counterfeit a block
  2. Big players do not get disproportionately bigger rewards
  3. More computing power is not useful for creating blocks
  4. No one member of the network can control the entire blockchain
The core concept of PoS is very similar to PoW: a lottery. However, the big difference is that it is not possible to "get more tickets" for the lottery by simply changing some data in the block. Instead of the "block hash" being the lottery ticket to judge against a target, PoS invents the notion of a "kernel hash".
The kernel hash is composed of several pieces of data that are not readily modifiable in the current block. And so, because miners do not have an easy way to modify the kernel hash, they cannot simply iterate through a large number of hashes as in PoW.
Proof of Stake blocks add many additional consensus rules in order to realize it's goals. First, unlike in PoW, the coinbase transaction (the first transaction in the block) must be empty and reward 0 tokens. Instead, to reward stakers, there is a special "stake transaction" which must be the 2nd transaction in the block. A stake transaction is defined as any transaction that:
  1. Has at least 1 valid vin
  2. Its first vout must be an empty script
  3. Its second vout must not be empty
Furthermore, staking transactions must abide by these rules to be valid in a block:
  1. The second vout must be either a pubkey (not pubkeyhash!) script, or an OP_RETURN script that is unspendable (data-only) but stores data for a public key
  2. The timestamp in the transaction must be equal to the block timestamp
  3. The total output value of a stake transaction must be less than or equal to the total inputs plus the PoS block reward plus the block's total transaction fees: output <= (input + block_reward + tx_fees)
  4. The first spent vin's output must be confirmed by at least 500 blocks (in other words, the coins being spent must be at least 500 blocks old)
  5. Though more vins can be used and spent in a staking transaction, the first vin is the only one used for consensus parameters.
These rules ensure that the stake transaction is easy to identify, and that it gives the blockchain enough info to validate the block. The empty vout method is not the only way staking transactions could have been identified, but this was the original design from Sunny King and has worked well enough.
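To make the identification rules above concrete, here is a minimal Python sketch; the Tx and Vout containers are hypothetical stand-ins for a real client's data structures:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Vout:
        script: bytes  # scriptPubKey; empty marks the first output of a coinstake
        value: int     # amount carried by this output

    @dataclass
    class Tx:
        vins: List[dict]
        vouts: List[Vout]

    def is_stake_transaction(tx: Tx) -> bool:
        # Structural rules from the list above: at least one vin,
        # an empty first vout, and a non-empty second vout.
        return (len(tx.vins) >= 1
                and len(tx.vouts) >= 2
                and tx.vouts[0].script == b""
                and tx.vouts[1].script != b"")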
Now that we understand what a staking transaction is, and what rules they must abide by, the next piece is to cover the rules for PoS blocks:
There are a lot of details here that we'll cover in a bit. The most important part that really makes PoS effective lies in the "kernel hash". The kernel hash is used similarly to PoW (if the hash meets the difficulty target, then the block is valid). However, the kernel hash is not directly modifiable in the context of the current block. We will first cover exactly what goes into these structures and mechanisms, and later explain why this design is exactly this way, and what unexpected consequences can come from minor changes to it.

The Proof of Stake Kernel Hash

The kernel hash specifically consists of the following exact pieces of data (in order): the previous block's stake modifier, the timestamp of the transaction containing the staked UTXO, the hash of that transaction, the output number of the UTXO within it, and the current block time. These are the fields hashed in the pseudo-code further below.
The stake modifier of a block is a hash of exactly:
The only way to change the current kernel hash (in order to mine a block) is thus either to change your "prevout", or to change the current block time.
A single wallet typically contains many UTXOs. The balance of the wallet is basically the total amount of all the UTXOs that can be spent by the wallet. This is of course the same in a PoS wallet. It is important, though, because any output can be used for staking: any one of these outputs can become the "prevout" in a staking transaction that forms a valid PoS block.
Finally, there is one more aspect that is changed in the mining process of a PoS block. The difficulty is weighted against the number of coins in the staking transaction. The PoS difficulty ends up being twice as easy to achieve when staking 2 coins, compared to staking just 1 coin. If this were not the case, then it would encourage creating many tiny UTXOs for staking, which would bloat the size of the blockchain and ultimately cause the entire network to require more resources to maintain, as well as potentially compromise the blockchain's overall security.
So, if we were to show some pseudo-code for finding a valid kernel hash now, it would look like:
while(true) {
    foreach(utxo in wallet) {
        blockTime = currentTime - currentTime % 16
        posDifficulty = difficulty * utxo.value
        hash = hash(previousStakeModifier << utxo.time << utxo.hash << utxo.n << blockTime)
        if(hash < posDifficulty) {
            done
        }
    }
    wait 16s -- wait 16 seconds, until the block time can be changed
}
This code isn't as easy to understand as our PoW example, so I'll attempt to explain it in plain English:
Do the following over and over, forever:
  1. Calculate the blockTime as the current time minus (current time modulo 16). The modulus is like dividing by 16 but keeping the remainder instead of the quotient; subtracting it rounds the time down to a 16-second boundary.
  2. Cycle through each UTXO in the wallet.
  3. For each UTXO, calculate the posDifficulty as the network difficulty multiplied by the number of coins held by the UTXO.
  4. For each UTXO, calculate a SHA256 hash using the previous block's stake modifier, some data from the UTXO, and finally the blockTime.
  5. Compare this hash to the posDifficulty. If the hash is less than the posDifficulty, the kernel hash is valid and you can create a new block.
  6. After going through all UTXOs, if no hash produced is less than the posDifficulty, wait 16 seconds and do it all over again.
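Here is the same loop as a runnable Python sketch; the Utxo fields and the hashing of raw concatenated bytes are illustrative assumptions (a real client hashes serialized consensus structures):

    import hashlib
    import time
    from dataclasses import dataclass

    @dataclass
    class Utxo:
        time: int    # timestamp of the transaction containing this output
        txid: bytes  # hash of that transaction
        n: int       # index of this output within it
        value: int   # amount of coins it holds

    def kernel_hash(stake_modifier: bytes, utxo: Utxo, block_time: int) -> int:
        # Hash the fixed fields, in order; the only "grinding" possible is
        # choosing a different prevout or waiting for a new block time.
        data = (stake_modifier
                + utxo.time.to_bytes(4, "little")
                + utxo.txid
                + utxo.n.to_bytes(4, "little")
                + block_time.to_bytes(4, "little"))
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    def try_stake(wallet, stake_modifier: bytes, difficulty: int):
        while True:
            now = int(time.time())
            block_time = now - now % 16  # block time moves in 16-second steps
            for utxo in wallet:
                pos_difficulty = difficulty * utxo.value  # weight the target by staked coins
                if kernel_hash(stake_modifier, utxo, block_time) < pos_difficulty:
                    return utxo, block_time  # valid kernel found; build the block
            time.sleep(16)  # wait until the block time can change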
Now that we have found a valid kernel hash using one of the UTXOs we can spend, we can create a staking transaction. This staking transaction will have 1 vin, which spends the UTXO we found that has a valid kernel hash. It will have (at least) 2 vouts. The first vout will be empty, identifying to the blockchain that it is a staking transaction. The second vout will either contain an OP_RETURN data transaction that contains a single public key, or it will contain a pay-to-pubkey script. The latter is usually used for simplicity, but using a data transaction for this allows for some advanced use cases (such as a separate block signing machine) without needlessly cluttering the UTXO set.
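Continuing the sketch, and reusing the hypothetical Utxo, Tx, and Vout containers from above, assembling such a staking transaction might look like:

    def build_coinstake(staked_utxo: Utxo, reward: int, pubkey_script: bytes) -> Tx:
        # Spend the winning UTXO in vin 0; the empty first vout marks the stake;
        # the second vout carries the pay-to-pubkey (or OP_RETURN data) script.
        return Tx(
            vins=[{"txid": staked_utxo.txid, "n": staked_utxo.n}],
            vouts=[
                Vout(script=b"", value=0),
                Vout(script=pubkey_script, value=staked_utxo.value + reward),
            ],
        )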
Finally, any transactions from the mempool are added to the block. The only thing left to do now is to create a signature, proving that we have approved the otherwise valid PoS block. The signature must use the public key that is encoded (either as a pay-to-pubkey script, or as a data OP_RETURN script) in the second vout of the staking transaction. The actual data signed is the block hash. After the signature is applied, the block can be broadcast to the network. Nodes in the network will then validate the block, and if they find it valid and there is no better blockchain, they will accept it into their own blockchain and broadcast the block to all the nodes they have connections to.
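A minimal signing sketch using the third-party ecdsa Python package (an assumption for illustration; a real client signs with the wallet key that controls the staked output, using its own serialization):

    import hashlib
    from ecdsa import SigningKey, SECP256k1  # pip install ecdsa

    signing_key = SigningKey.generate(curve=SECP256k1)
    pubkey = signing_key.get_verifying_key().to_string()  # raw key bytes; encoded into the second vout

    block_hash = hashlib.sha256(b"serialized-block-header").digest()  # placeholder serialization
    signature = signing_key.sign_digest(block_hash)  # the data signed is the block hash itself

    # Any node can verify the signature against the pubkey embedded in the staking transaction.
    assert signing_key.get_verifying_key().verify_digest(signature, block_hash)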
Now we have a fully functional and secure PoSv3 blockchain. PoSv3 is what we determined to be most resistant to attack while maintaining a pure decentralized consensus system (i.e., without masternodes or curators). To understand why we reached this conclusion, however, we must understand its history.

PoSv3's History

Proof of Stake has a fairly long history. I won't cover every detail, but will broadly cover what changed between each version to arrive at PoSv3, for historical purposes:
PoSv1 - This version is implemented in Peercoin. It relied heavily on the notion of "coin age", or how long a UTXO has gone unspent on the blockchain. Its implementation basically made it so that the higher the coin age, the more the difficulty was reduced. This had the bad side effect, however, of encouraging people to open their wallet for staking only once a month or less often. Assuming the coins were all relatively old, they would almost instantaneously produce new staking blocks. This makes double-spend attacks extremely easy to execute. Peercoin itself is not affected by this because it is a hybrid PoW and PoS blockchain, so the PoW blocks mitigated this effect.
PoSv2 - This version removed coin age completely from consensus, and used a completely different stake modifier mechanism from v1. The changes are too numerous to list here. All of this was done to remove coin age from consensus and make it a safe consensus mechanism without requiring a PoW/PoS hybrid blockchain to mitigate various attacks.
PoSv3 - PoSv3 is really more of an incremental improvement over PoSv2. In PoSv2 the stake modifier also included the previous block time. This was removed to prevent a "short-range" attack where it was possible to iteratively mine an alternative blockchain by iterating through previous block times. PoSv2 used block and transaction times to determine the age of a UTXO; this is not the same as coin age, but rather is the "minimum confirmations required" before a UTXO can be used for staking. This was changed to a much simpler mechanism where the age of a UTXO is determined by its depth in the blockchain. This no longer incentivizes inaccurate timestamps on the blockchain, and is also more immune to "timewarp" attacks. PoSv3 also added support for OP_RETURN coinstake transactions, which allow a vout to contain the public key for signing the block without requiring a full pay-to-pubkey script.

References:

  1. https://peercoin.net/assets/paper/peercoin-paper.pdf
  2. https://blackcoin.co/blackcoin-pos-protocol-v2-whitepaper.pdf
  3. https://www.reddcoin.com/papers/PoSV.pdf
  4. https://blog.ethereum.org/2015/08/01/introducing-casper-friendly-ghost/
  5. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/kernel.h#L11
  6. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/main.cpp#L2032
  7. https://github.com/JohnDolittle/blackcoin-old/blob/master/src/main.h#L279
  8. http://earlz.net/view/2017/07/27/1820/what-is-a-utxo-and-how-does-it
  9. https://en.bitcoin.it/wiki/Script#Obsolete_pay-to-pubkey_transaction
  10. https://en.bitcoin.it/wiki/Script#Standard_Transaction_to_Bitcoin_address_.28pay-to-pubkey-hash.29
  11. https://en.bitcoin.it/wiki/Script#Provably_Unspendable.2FPrunable_Outputs
Article by earlz.net
http://earlz.net/view/2017/07/27/1904/the-missing-explanation-of-proof-of-stake-version
submitted by B3TeC to Moin

How to Bet with Bitcoins -- Play Bitcoin Roulette
Mines Autobet Martingale Strategy | Stake Bitcoin Casino
BitcoinRush.io - Provably Fair Gambling
CoinDragon New Casino With Multiple Games !! Best Provably Fair Games !! Dice, X-Factor, Mines...
CryptoFairPlay.com - The Best Bitcoin Casino - Provably Fair

Which of the Bitcoin-powered gambling websites are provably fair? For clarity's sake, a provably fair game is a game that uses cryptographic algorithms to assure its players that the game host or other players did not tamper with the outcome of the game after it has begun. From the Bitcoin Wiki: provable fairness refers to the technology used by bitcoin casino and gambling sites to verify the outcomes of their games. This allows players to confirm that the outcomes were true and fair, without any adjustments or manipulations by the gambling sites between when the bet was placed and when the outcome was revealed. BitStarz is a provably fair gaming site that uses a cryptographic data encryption method to ensure that game results cannot be tampered with; players can instantly check all game results. Advantages: more than 850 games; super-fast payments (average processing time of no more than 10 minutes); online support 24/7 (support staff with 3 years of experience ...)


How to Bet with Bitcoins -- Play Bitcoin Roulette

CryptoFairPlay.com - The Best Bitcoin Casino - Provably Fair. Trick to MULTIPLY your LITECOINS, STRATEGY 2020, MULTIPLY LTC, 99.9% safe ... Bitcoin Roulette is a classic casino-style roulette game that allows you to wager with bitcoins. It is an easy and fun way to win more bitcoins! 100% verifiable and cheat-proof, a provably fair ... Demonstration of provably fair technology on the bitcoin gambling website https://www.crypto-games.net. Bitcoin: what do Bill Gates, Buffett, Elon Musk & Richard Branson have to say about Bitcoin? ... Provably fair online gambling with zero fees and zero limitations. Provably Fair on Bitcoin Casino Bitsler.com. GOING CRAZY BETTING $50 DOLLARS A SPIN ON A SLOT MACHINE!!! ...
