
Cryptanalysis

Economist, Scientific Director of the FINTECH graduate programme, Director of Research at Thalamus Lab, RANEPA
Chapter 4. Cryptography and security for blockchains and DLT
Alexander Didenko
The main goal of this longread is to give an idea of how security is analyzed (or might be analyzed) in blockchains.

Basic notions

Cryptanalysis, according to Wikipedia, is "the study of analyzing information systems in order to study the hidden aspects of the systems", i.e., "given some encrypted data ("ciphertext"), ... to gain as much information as possible about the original, unencrypted data ("plaintext")." Security engineering, on the other hand, is "a specialized field of engineering that focuses on security aspects in the design of systems that need to be able to deal robustly with possible sources of disruption, ranging from natural disasters to malicious acts."
In both contexts the system (say, a cryptographic protocol) is usually modeled as an interaction between a model of an adversary or attacker (or multiple attackers) and a model of a defender, in the presence of some security goal. For example, the defender may defend — preserve both the security and the functionality of — some asset (a server in a network, or a protected species in a national park) to which an attacker wants to gain access or exposure. Or the defender may encrypt some plaintext, while the security goal of the attacker is to extract some information from the ciphertext, say, to distinguish two ciphertexts.
Each side has its own spectrum of possible actions and limited resources (for example, money, or computational power to cope with computational complexity). The questions usually asked are: how to build a security system in which an attack on the security goal would be infeasible in some reasonable (say, polynomial) time, or unprofitable; how to recognize that an attack is in fact feasible in reasonable time; and how to assess attack detection probabilities. Attacks can take very different forms, depending on the security goal, which is context-specific: fare evasion; installation of malicious software on the defender's server; botnets and ransomware; interception of information; jamming a transmitter; wildlife poaching.
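To make the attacker/defender framing concrete, below is a minimal, purely illustrative sketch (my own, not from any cited source) of a ciphertext-distinguishing game: the attacker names two messages, the defender encrypts one of them with a deliberately weak XOR "cipher", and the attacker tries to guess which one was encrypted. All names (toy_encrypt, distinguishing_game, repeated_byte_attacker) are hypothetical.

```python
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """Toy XOR 'cipher' with a short key reused across bytes -- deliberately weak."""
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def distinguishing_game(attacker_guess_fn, n_trials: int = 1000) -> float:
    """Play the indistinguishability game n_trials times; return the attacker's win rate."""
    wins = 0
    for _ in range(n_trials):
        key = secrets.token_bytes(4)
        m0, m1 = b"attack at dawn!!", b"aaaaaaaaaaaaaaaa"   # attacker-chosen messages
        b = secrets.randbelow(2)                            # defender's secret coin
        ciphertext = toy_encrypt(key, m0 if b == 0 else m1)
        if attacker_guess_fn(m0, m1, ciphertext) == b:
            wins += 1
    return wins / n_trials

def repeated_byte_attacker(m0, m1, ciphertext) -> int:
    """Exploit key reuse: a constant plaintext yields a periodic ciphertext."""
    periodic = all(ciphertext[i] == ciphertext[i % 4] for i in range(len(ciphertext)))
    return 1 if periodic else 0

print("attacker advantage over random guessing:",
      distinguishing_game(repeated_byte_attacker) - 0.5)
```

Because the key is reused, the attacker wins essentially every time; for a secure cipher the advantage over 0.5 should be negligible.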

Game-theoretic security

Earlier, we examined strategic interactions and their equilibrium outcomes, assuming that participants use different solution concepts. An important application of game theory to cryptocurrencies is the design of mechanisms that achieve a desirable social outcome in equilibrium, provided that all agents (for example, miners and/or users) maximize their utility.
In recent years there has been growing interest in applying the game-theoretic framework to another area important for cryptocurrencies: security. Security games are strategic interactions between the various types of agents participating in a security protocol. The two main types of agent are the malicious attacker and the defender; the rest — "honest" participants in communication protocols, file and mail servers, smart grid operators, as well as intrusion detection systems, "honeypots", spam classifiers, Industrial Control Systems (ICS), etc. — are modeled as strategic entities, with all the attributes of players in traditional game theory (utility functions, best-response strategies, and so on).
This environment can be complemented with "non-strategic" entities, such as spammers in the game between attacker and defender; see Dritsoula et al. (2012). Good reviews are Manshaei et al. (2013) and Liang & Yan (2019).
The game-theoretic approach to security analysis helps to (a worked toy example follows this list):
- Understand the equilibrium outcomes of such interactions under certain assumptions about the agents, their strategies, and the protocol properties — and thereby evaluate security vulnerabilities and assess the risk of breaching security or functionality goals.
- Outline possible trade-offs in protocol design that affect equilibrium outcomes or Pareto optimality, and identify the optimal operating point on a Pareto-optimal frontier — such as "false positives" vs. "false negatives" (false intrusion-detection signals vs. missed attacks), or the vulnerability/cost trade-off.
- Understand how various protocol parameters affect the equilibrium.
- Analyze bounds on the utility functions of attackers and defenders. For example, if we manage to prove that under certain assumptions the upper bound of the adversary's utility is negative, we have effectively proved that attacking is unprofitable for the adversary.
- Derive best-response strategies — for example, of defenders against attackers.
- Understand how parameters of the environment affect the equilibrium. For example, many security games are modeled as games on networks; in that case the game graph helps answer how its topology — average node degree, centrality, etc. — affects the outcome of the game.
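As an illustration of the first points, here is a minimal sketch (my own toy example, not from the cited papers) of a 2x2 intrusion-detection game: the defender chooses Monitor/Idle, the attacker chooses Attack/Wait, and the mixed Nash equilibrium is computed in closed form from the indifference conditions. The payoff numbers are purely illustrative.

```python
# Toy 2x2 "inspection" security game; payoff numbers are purely illustrative.
# Rows: defender action (0 = Monitor, 1 = Idle); columns: attacker action (0 = Attack, 1 = Wait).
D = [[ 1, -1],   # defender's payoffs: catching an attack pays, idle monitoring costs
     [-5,  0]]   # an undetected attack is very costly for the defender
A = [[-5,  0],   # attacker's payoffs: being caught is costly
     [ 5,  0]]   # an undetected attack pays off

def mixed_equilibrium_2x2(D, A):
    """Mixed Nash equilibrium of a 2x2 game via the indifference conditions."""
    # attacker's attack probability q makes the defender indifferent between her rows
    q = (D[1][1] - D[0][1]) / ((D[0][0] - D[1][0]) + (D[1][1] - D[0][1]))
    # defender's monitoring probability p makes the attacker indifferent between his columns
    p = (A[1][1] - A[1][0]) / ((A[0][0] - A[0][1]) + (A[1][1] - A[1][0]))
    return p, q

p, q = mixed_equilibrium_2x2(D, A)
print(f"defender monitors with prob {p:.3f}, attacker attacks with prob {q:.3f}")
# With these numbers: the defender monitors half the time, and attacks occur
# only ~14% of the time in equilibrium -- an example of how payoff parameters
# (detection value, attack damage) shape the equilibrium outcome.
```

Changing, say, the penalty for a caught attacker changes the equilibrium mixing, which is exactly the kind of parameter sensitivity listed above.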
Colonel Blotto game
A few weeks ago we discussed generals' games; now it's time to move on to colonels. The Colonel Blotto, or divide-a-dollar, game was developed by the French mathematician and politician Émile Borel in 1921. His original idea is very similar to the "Risk" board game (which can be played online here or here, though playing it is not required to get the idea of the Blotto game).
There is also a version of this game in which a general participates as well, namely General Tian Ji's horse racing strategy.
In Borel's game, two players, Colonel Blotto and General Lotto, each constrained by a "budget" of troops, have to divide their armies among a finite number N of battlefields, or hills. In some versions of the game the budgets of Blotto and Lotto are asymmetric, with Blotto having more troops. Decisions are taken simultaneously, so the colonels are not informed of each other's actions. The outcome of the battle on each battlefield is then decided by comparing the number of troops: whoever sends more troops wins that battle. In Tian Ji's version, instead of allocating troops to battlefields, the general-players — Tian Ji and the Emperor of Qi — assign horses of varying strength to races; whoever sends the stronger horse wins that race. Whoever wins the larger number of the N battles or races wins the war, or the race meeting. General Tian Ji manages to win the races even though his "budget" is weaker: his best horse is worse than the Emperor's best one but better than the Emperor's second one, and likewise his second horse is worse than the Emperor's second but better than the Emperor's third. Unlike Blotto and Lotto, who make their decisions simultaneously, Tian Ji has an advantage over the other player. Knowing the Emperor's allocation, he sends his worst horse against the Emperor's best one and loses that race; but at the price of this sacrifice he beats the Emperor in the other two races, sending his first horse against the Emperor's second one and his second horse against the Emperor's third one.
In principle, a similar outcome is achievable in the Blotto game as well, but this game has no pure-strategy Nash equilibrium, and researchers long had difficulty finding general analytic solutions for it. Seventeen years after the introduction of the initial setup, Borel & Ville (1938) developed a solution for the 3-battlefields/symmetric-budgets version of the game; then Gross & Wagner (1950) proposed solutions for (1) the N-battlefields/symmetric-budgets and (2) the 2-battlefields/asymmetric-budgets cases; finally, only the much-cited Roberson (2006) developed a solution for the N-battlefields/asymmetric-budgets setup, which in effect issued a "hunting license" for Blotto-game papers in a number of domains: military conflicts; sporting, economic and political contests; and, of course, computer science (network resource allocation) and security (specifically, cybersecurity). For example, a general network security setting is analysed in Park & Kim (2015) and Guan et al. (2019); scenarios with a network center and jammers were explored in different contexts in Wu et al. (2009) and Raheem & Abdellatif (2017) — general wireless connectivity; Kim (2015), Labib et al. (2015) and Namvar et al. (2016) — IoT; Ferdowsi et al. (2017) — smart cities; and Heyman & Gupta (2018) — defender coalition formation.
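Below is a minimal sketch (my own illustration, not from the cited papers) of how a single play of a Blotto-style game is scored: two allocations across N battlefields are compared field by field, and whoever wins the majority of battlefields wins the war. The function name and example allocations are hypothetical.

```python
def blotto_outcome(alloc_blotto, alloc_lotto):
    """Score one play of Colonel Blotto: more troops on a battlefield wins it;
    more battlefields won wins the war (a tie on a field counts for neither side)."""
    assert len(alloc_blotto) == len(alloc_lotto)
    blotto_wins = sum(b > l for b, l in zip(alloc_blotto, alloc_lotto))
    lotto_wins = sum(l > b for b, l in zip(alloc_blotto, alloc_lotto))
    if blotto_wins > lotto_wins:
        return "Blotto"
    if lotto_wins > blotto_wins:
        return "Lotto"
    return "draw"

# Asymmetric budgets: Blotto has 12 troops, Lotto has 9, over N = 3 battlefields.
print(blotto_outcome([4, 4, 4], [9, 0, 0]))  # Lotto concentrates everything and loses 1-2
print(blotto_outcome([4, 4, 4], [0, 5, 4]))  # the weaker budget still forces a draw
```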
The Blotto game fits the setting of a generic network security game. A stage Attack/Defense game is a static game played on some node of a network: the attacker chooses between {Attack, Not attack}, and the defender chooses between {Defend, Not defend}. A network security game is then a game in which the attacker and the defender play many independent Attack/Defense games, one on each node of the network.
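To connect the two pictures, here is a hedged sketch (my own; node names, budgets and outcomes are hypothetical) of a network security game played Blotto-style: the attacker and the defender split fixed budgets of effort across the nodes, and each node's stage game is resolved by comparing the effort committed to it.

```python
# Hypothetical illustration: a Blotto-style network security game.
# Each node hosts one Attack/Defense stage game; in this sketch a node is
# compromised if the attacker commits strictly more effort to it than the defender.
nodes = ["web server", "mail server", "database", "ICS gateway"]

defender_alloc = {"web server": 3, "mail server": 2, "database": 4, "ICS gateway": 1}  # budget 10
attacker_alloc = {"web server": 0, "mail server": 3, "database": 2, "ICS gateway": 5}  # budget 10

compromised = [n for n in nodes if attacker_alloc[n] > defender_alloc[n]]
print("compromised nodes:", compromised)  # ['mail server', 'ICS gateway']
```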
An important extension of Borel's original setup was introduced in Osorio (2013). This setup resembles the Risk board game even further: the outcome of a battle is now probabilistic rather than determined solely by the number of troops sent to the hill. Two further extensions, directly relevant to the cybersecurity context, are Schwartz et al. (2014), which introduces battlefield heterogeneity (varying weights of battlefields in determining victory in the war) and alliances between players in N-person Blotto games, and Gupta et al. (2014), which allows players to create new battlefields.
Stackelberg security games
In 1934 Heinrich Freiherr von Stackelberg, a Moscow-born German economist, published the book Marktform und Gleichgewicht (Market Structure and Equilibrium), in which, among other things, he described a game between two firms in a duopoly, later known as Stackelberg competition, the Stackelberg leader-follower model, or the Stackelberg leadership model. In this game a move is a decision on the quantity of some good to produce, and the players move one after the other (see the figure below).
The idea of leader-follower hierarchical interaction turned out to be extremely relevant in the security context, where the defender commits to an action that is subsequently observable by an adversary and can be exploited by him. The first player has the first-mover advantage, and hence can predetermine the options available for the second player's move. It is therefore important for the first player to correctly predict the potential responses of the second player and to choose the best first-mover strategy. In this regard, there are two concepts of an adversary in the game-theoretic security domain: first, when the adversary is software, it is modeled as a rational agent; second, when the adversary is a human, the rationality assumption is weakened. A rational first mover anticipating a perfectly rational second mover can assume that the latter will always select the single action that maximizes his utility, so finding the best first move is straightforward. But if the agent is not perfectly rational, he may select one action out of some range of possible actions, which makes this solution concept non-robust for the defender. In this case, randomization (see, e.g., Ridinger et al. (2016)) can be a rational solution for the defender. An extensive review of Stackelberg security games can be found in Sinha et al. (2018). The concept of the Stackelberg security game was further generalized by Blocki et al. (2013) to Stackelberg audit games — Stackelberg security games in which the Defender, or Auditor, can punish the Attacker with some configurable punishment parameter.
Stackelberg competition

Graphic source
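Here is a minimal sketch of the Stackelberg security logic described above with a perfectly rational follower (my own toy example; the targets, payoffs and function names are hypothetical): the defender commits to a randomized coverage of two targets, the attacker observes the commitment and attacks the target with the highest expected payoff, and the defender searches over commitments for the one that maximizes her own expected payoff against that best response.

```python
# Toy Stackelberg security game with two targets and a perfectly rational attacker.
# Payoffs are purely illustrative: defender/attacker values when the attacked
# target is covered ("cov") or left uncovered ("unc").
TARGETS = {
    "airport": {"def_cov": 1, "def_unc": -10, "att_cov": -5, "att_unc": 10},
    "station": {"def_cov": 1, "def_unc": -4,  "att_cov": -5, "att_unc": 4},
}

def attacker_best_response(coverage):
    """The attacker observes the coverage probabilities and picks the target
    with the highest expected attacker payoff."""
    def att_value(t):
        p = coverage[t]
        return p * TARGETS[t]["att_cov"] + (1 - p) * TARGETS[t]["att_unc"]
    return max(TARGETS, key=att_value)

def defender_value(coverage, attacked):
    p = coverage[attacked]
    return p * TARGETS[attacked]["def_cov"] + (1 - p) * TARGETS[attacked]["def_unc"]

# The defender has one patrol unit, so coverage probabilities must sum to 1;
# a simple grid search over commitments stands in for an exact solver.
best = None
for i in range(101):
    cov = {"airport": i / 100, "station": 1 - i / 100}
    value = defender_value(cov, attacker_best_response(cov))
    if best is None or value > best[1]:
        best = (cov, value)

print("best commitment found:", best[0], "defender's expected payoff:", round(best[1], 2))
```

The grid search is only an approximation of the Stackelberg optimum, but it shows the key point: the defender's best commitment is a randomized one, computed against the follower's best response.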
FLIPIT, an abstract cybersecurity game
While Stackelberg and Blotto games came to game-theoretic security from classical game theory, there are also contributions native to the field. Meet the game of "Stealthy Takeover", or FLIPIT, proposed in van Dijk et al. (2013):
"In FLIPIT, players compete to control a shared resource. Unlike most existing games, FLIPIT allows players to move at any given time, taking control of the resource. The identity of the player controlling the resource, however, is not revealed until a player actually moves. To move, a player pays a certain move cost. The objective of each player is to control the resource a large fraction of time, while minimizing his total move cost. FLIPIT provides a simple and elegant framework in which we can formally reason about the interaction between attackers and defenders in practical scenarios"
While the original paper proposes FLIPIT as a model of targeted attacks, the authors note that the game may be of interest for other security domains, such as cryptographic key rotation, password-changing policies, refreshing virtual machines, and cloud auditing. Currently there are 170+ papers mentioning FlipIt on Semantic Scholar, discussing various applications of this continuous-time game and its descendants. Daian et al. (2019) discuss front-running attacks in decentralized exchanges on blockchains using a concept similar to FlipIt.
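The following is a rough, discrete-time sketch of FLIPIT's payoff accounting (my own, not from the paper; the horizon, periods and costs are hypothetical): both players flip the resource periodically without seeing who currently controls it, and each player's score is the fraction of time in control minus the cost of moves.

```python
# Discrete-time approximation of FLIPIT with two periodic players.
# A "flip" takes control of the resource (whether or not you already hold it),
# costs the mover a fixed amount, and control is not observable between moves.
HORIZON = 10_000            # number of time steps simulated
DEFENDER_PERIOD = 40        # defender flips (e.g., rotates a key) every 40 steps
ATTACKER_PERIOD = 100       # attacker flips (e.g., re-compromises) every 100 steps
DEFENDER_COST = 0.5         # cost per defender move
ATTACKER_COST = 2.0         # cost per attacker move

owner = "defender"          # defender controls the resource at time 0
control = {"defender": 0, "attacker": 0}
moves = {"defender": 0, "attacker": 0}

for t in range(1, HORIZON + 1):
    # If both move at the same instant, let the defender's flip land last
    # (a tie-break convention for this sketch only).
    if t % ATTACKER_PERIOD == 0:
        owner, moves["attacker"] = "attacker", moves["attacker"] + 1
    if t % DEFENDER_PERIOD == 0:
        owner, moves["defender"] = "defender", moves["defender"] + 1
    control[owner] += 1

for player, cost in [("defender", DEFENDER_COST), ("attacker", ATTACKER_COST)]:
    benefit = control[player] / HORIZON            # fraction of time in control
    print(player, "payoff per step:", round(benefit - cost * moves[player] / HORIZON, 3))
```

Varying the periods and move costs shows the trade-off the game formalizes: flipping more often buys more control time but burns more budget.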
FLIPIT gave rise to a number of papers describing, in effect, a new class of security games. Other special classes of security games have been introduced to capture particular features of the field, such as Collusive Security Games (COSG), which model colluding adversaries and develop solutions that allow a defender to counter (or, more narrowly, inhibit) collusion among adversaries.
Bayesian machine games
As mentioned earlier, there are several concepts of agent rationality in game-theoretic security. Best responses, equilibria, the robustness of the defender's strategies, etc. can depend significantly on this issue. Under the full-rationality assumption, the attacker is well informed and has a clear utility function, which he maximizes. The bounded-rationality concept implies that agents are limited in computational resources and information and thus do not maximize per se, but instead look for a satisfactory ("satisficing") solution; its application to the security games domain is explored, e.g., in Sanjab & Saad (2016).
The bounded-rationality concept is related to the idea of computational games, in which agents are charged for computation. Bayesian rationality (see, e.g., Mariotti (1995) or Frahm (2018) for a general introduction) implies that players do not possess full information about the states of the environment (called "Nature" in classic applications) and instead use subjective prior probabilities (beliefs), combining them with whatever information is available to them. Hence, Bayesian games are games in which players neither have complete information nor are completely uninformed; instead, they hold subjective probabilities. A video introduction to Bayesian games and Bayesian Nash equilibrium can be found here. Bringing the Bayesian solution concept to computer systems led to the concept of the Bayesian machine game (BMG) — see, e.g., Halpern & Pass (2008) and Halpern & Pass (2013) — in which players are charged for computational resources, storage costs, etc. (an idea somewhat similar to bounded rationality), and players strategically pick the machine that executes their actions (e.g., a Turing machine or an interactive Turing machine; learn more about Turing machines here and here). A complexity function is defined for the machine and included in the players' utility functions. Bayesian machine games (and, more generally, game-theoretic approaches to security in blockchains) are discussed in this podcast (rus).
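Schematically (this is my own simplified rendering, not the authors' exact notation), a player in a BMG picks a machine and her utility depends both on the game outcome and on the complexity her machine incurs; a simple additively separable special case looks like this:

```latex
% Schematic BMG utility (simplified; see Halpern & Pass for the precise definitions).
% Player i of type t_i picks a machine M_i; o_i is the machine's output and c_i the
% complexity (running time, storage, randomness, ...) that it incurs.
u_i(\vec{t}, \vec{o}, \vec{c}),
\qquad o_i = M_i(t_i), \qquad c_i = \mathcal{C}(M_i, t_i);
% an additively separable special case charges a price \lambda_i per unit of complexity:
u_i \;=\; v_i(\vec{t}, \vec{o}) \;-\; \lambda_i \, c_i .
```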
Stochastic games
In "Evolution of trust" we see how stage game — basic game of trust — turns to repeated game, and further turns to evolutionary game, where we have varying breeds of agents with their own strategies each. Many of these games are too complex to find equilibrium using analytical tools. Remember Sandbox mode in "Evolution of trust": when we have multi-party, repeated interactions, with heterogeneous — in terms of their strategies — players, it takes many simulations to get total evolutionary dynamics of the system. Analytically — through operations with formulas and symbols — it is impossible to understand on what equilibrium will the game converge.
In such cases agent-based modeling is used; it will be discussed in more detail in one of the following longreads. This can help to determine, for example, attack detection probabilities, and to see how sensitive this parameter is to changes in the other parameters of the system.
Another generalization of the repeated game is the stochastic game, which combines the ideas of a Markov chain and a repeated game. You are already familiar with the Markov chain: a discrete-time Markov chain on a finite state space is essentially a finite-state machine with probabilistic transitions. Building on that, a Markov game is a repeated game in which players' payoff functions depend on the current state of the system; in a stochastic game, players' payoffs depend on the current state of the system AND on the other players' actions. Markov security games apply this concept to the security domain. Law et al. (2012) use the Alpcan-Basar stochastic game framework to define a system with several states of varying riskiness. After the attacker and the defender have chosen their actions, the system transitions into one of its states with some probability: for every triple [current state]-[attacker's action]-[defender's response], a probability is defined for the system to move to some other state or to remain in the current one.
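A minimal sketch of such a transition kernel (my own; the states, actions and probabilities are hypothetical): for every (state, attacker action, defender response) triple we store a distribution over next states and sample one step of the game.

```python
import random

# Hypothetical Markov security game: transition kernel indexed by
# (current state, attacker's action, defender's response) -> {next state: probability}.
TRANSITIONS = {
    ("healthy",     "attack", "defend"): {"healthy": 0.8, "compromised": 0.2},
    ("healthy",     "attack", "idle"):   {"healthy": 0.3, "compromised": 0.7},
    ("healthy",     "wait",   "defend"): {"healthy": 1.0},
    ("healthy",     "wait",   "idle"):   {"healthy": 1.0},
    ("compromised", "attack", "defend"): {"healthy": 0.6, "compromised": 0.4},
    ("compromised", "attack", "idle"):   {"compromised": 1.0},
    ("compromised", "wait",   "defend"): {"healthy": 0.6, "compromised": 0.4},
    ("compromised", "wait",   "idle"):   {"compromised": 1.0},
}

def step(state, attacker_action, defender_response):
    """Sample the next system state given the current state and both players' actions."""
    dist = TRANSITIONS[(state, attacker_action, defender_response)]
    return random.choices(list(dist), weights=list(dist.values()))[0]

state = "healthy"
for _ in range(5):
    state = step(state, random.choice(["attack", "wait"]), random.choice(["defend", "idle"]))
    print(state)
```

In the full stochastic-game treatment, each state/action profile would also carry stage payoffs, and players would optimize discounted sums of those payoffs over the induced Markov dynamics.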
Rational cryptography
In addition to pure security games, there are other applications of game theory to security tasks. The most important question here is what incentives should be given to the human participants of security protocols to enforce honesty, or simply compliance with elementary rules of secrecy. Dodis et al. (2000) introduced the idea that incentives can be incorporated into cryptographic protocols. In this context, the way incentives are accounted for, the way the adversary and his capabilities are defined, and the solution concepts used to prove protocol security against such adversaries all have to be modified.

Provable cryptographic security

Provable security, according to Wikipedia, "refers to any type or level of security that can be proved". In this sense, the game-theoretic models that we examined above can be regarded as a kind of provable security (at least to the extent that a real-life system can be described as a security game). But game-theoretic models do not always (in fact, very rarely) deal directly with cryptographic security. There are two important points here that should not be confused.
Cryptographic proofs of security can contribute to a systematic security engineering process in two ways: first, through the direct application of various methods to specific domains (to get an idea, refer to the beginning of the chapter, where we list the domains); second, by contributing to the provable security of cryptographic protocols themselves.
In a stricter sense, the goal of provable security is to prove "that a protocol P is secure with respect to a computational problem or primitive S" (Menezes, 2012). To do that, one needs to define:
(1) The security requirements per se
(2) Assumptions about the adversary, his capabilities and goals
(3) Assumptions about the features of the computational problem S
Then "reductionist security proof" would follow, which formally proofs that given (1), (2) and (3), Adversary would not break P. The problems connected with this formal approach to cryptographic protocol security are discussed here and here (video-version is here).
BAN logic
Burrows–Abadi–Needham logic, or BAN logic, is, according to Wikipedia, "a set of rules for defining and analyzing information exchange protocols. Specifically, BAN logic helps its users determine whether exchanged information is trustworthy, secured against eavesdropping, or both. BAN logic starts with the assumption that all information exchanges happen on media vulnerable to tampering and public monitoring. This has evolved into the popular security mantra: 'Don't trust the network.'"
It was introduced in the seminal paper "A Logic of Authentication" by Michael Burrows, Martín Abadi and Roger Needham. The Wide-Mouth Frog protocol is a "toy" example of a communication protocol introduced in Burrows et al. (1990).
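For a flavor of how such rules look, two of BAN logic's standard inference rules are sketched below, written in words rather than in the original symbolic notation (consult the paper for the authoritative formulation): the message-meaning rule for shared keys and the nonce-verification rule.

```latex
% Message-meaning rule (shared keys): if P believes that K is a good key shared
% with Q, and P sees a message X encrypted under K, then P believes Q once said X.
\frac{P\ \text{believes}\ Q \overset{K}{\leftrightarrow} P, \qquad P\ \text{sees}\ \{X\}_K}
     {P\ \text{believes}\ (Q\ \text{said}\ X)}

% Nonce-verification rule: if P believes X is fresh and believes Q once said X,
% then P believes that Q (currently) believes X.
\frac{P\ \text{believes}\ \mathrm{fresh}(X), \qquad P\ \text{believes}\ (Q\ \text{said}\ X)}
     {P\ \text{believes}\ (Q\ \text{believes}\ X)}
```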

Cryptoeconomic security

Blockchain security model
A good high-level review of security models in blockchains is here. Probably one of the best-known pieces on this topic is Sompolinsky & Zohar (2016). Here is a talk by Aviv Zohar on the security of Bitcoin's communication infrastructure at the Hebrew University. The types of attacks in cryptoeconomic systems, according to Nick Tomaino, are:
Cryptography attacks
Consensus attacks
An example of such an attack is the triangle of harm; another example of a consensus attack is miner front-running — read about it here and here. Another important issue is that the nodes responsible for consensus in blockchains receive payment in the form of fees and inflation, and a key question is how much a system should pay such nodes: "how much "defense spending" is required for a blockchain to be secure, and given a particular amount of spending required, which is the best way to get it?" Last but not least is finality, another important property, which we discussed earlier. (A back-of-envelope illustration of the "defense spending" question follows this list of attack types.)
Social engineering attacks
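As a purely hypothetical back-of-envelope sketch of the "defense spending" question (all numbers below are made up for illustration and are not real network data), one can compare the value a proof-of-work chain pays its consensus nodes per day with a crude proxy for the cost of out-mining them:

```python
# Hypothetical numbers only -- NOT real network statistics.
block_reward = 3.0          # coins issued per block (inflation component)
avg_fees_per_block = 0.2    # coins collected in fees per block
blocks_per_day = 144        # e.g., ~10-minute blocks
coin_price_usd = 20_000     # assumed market price of one coin

# Daily "defense spending": what the protocol pays the nodes securing consensus.
daily_security_budget = (block_reward + avg_fees_per_block) * blocks_per_day * coin_price_usd

# Crude attack-cost proxy: renting enough hashpower to match the honest miners for a day.
honest_hashpower_rental_per_day = 12_000_000   # assumed USD cost, purely illustrative
attack_cost = honest_hashpower_rental_per_day

print(f"daily security budget: ${daily_security_budget:,.0f}")
print(f"assumed 1-day 51% rental cost: ${attack_cost:,.0f}")
print("attack cheaper than daily budget" if attack_cost < daily_security_budget
      else "attack costlier than daily budget")
```

This is only an accounting identity, not a security proof: the real question, as noted above, is how much such spending is actually required and how best to raise it (fees vs. inflation).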
Additional materials