Jekyll2023-08-13T09:24:23-07:00https://akhilsb.github.io/feed.xmlAkhil S. BandarupalliPh.D. student in Computer Science at Purdue UniversityAkhil S. Bandarupalliabandaru@purdue.eduEfficient O(n^2) Byzantine Fault-Tolerant Asynchronous Approximate Agreement2023-03-30T00:00:00-07:002023-03-30T00:00:00-07:00https://akhilsb.github.io/posts/2023/3/bp-3<p>In this post, I will describe a new and efficient Asynchronous Approximate Agreement ($\epsilon$-agreement) protocol with only $\mathcal{O}(n^2)$ bits of communication per round. Our protocol requires honest nodes to have binary inputs: every honest node’s input $V_i \in \{v_0,v_1\}$, where both $v_0$ and $v_1$ are publicly known and $\Delta = v_1-v_0$.</p> <h1 id="background">Background</h1> <p>The Asynchronous Approximate Agreement or $\epsilon$-agreement primitive allows a set of nodes to approximately agree on a value within the convex hull of honest nodes’ inputs. This primitive circumvents the prominent Fischer-Lynch-Paterson (FLP) impossibility result, which states that no deterministic asynchronous protocol can achieve agreement amongst a set of nodes if even a single node may crash. Any $\epsilon$-agreement protocol must satisfy the following three properties, where $\mathcal{V}$ denotes the set of honest nodes’ inputs.</p> <ol> <li><em>Termination</em>: Every honest node $i$ must eventually decide a value $v_i$.</li> <li>$\epsilon$-<em>agreement</em>: The decided values $v_i,v_j$ of any honest nodes $i,j$ must be within $\epsilon$ distance of each other. $$|v_i-v_j| &lt; \epsilon \quad \forall i,j \in \{1,\ldots,n\}$$</li> <li><em>Convex-Hull Validity</em>: The decided value $v_i$ of any honest node must be within the convex hull of honest nodes’ inputs. $$\min(\mathcal{V})\leq v_i \leq \max(\mathcal{V})$$</li> </ol> <p>Abraham, Amit, and Dolev’s $\epsilon$-agreement protocol achieves all three properties with optimal resilience of $1/3$ faults and has a communication complexity of $\mathcal{O}(\kappa n^3)$ bits. 
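</p> <p>As a quick illustration, the three properties above can be checked mechanically on a protocol run’s outputs. The following Python sketch (the helper name is mine, not from any paper) verifies $\epsilon$-agreement and convex-hull validity for a set of decided values:</p>

```python
from itertools import combinations

# Hypothetical checker (not from the paper): given honest nodes' inputs and
# their decided values, verify epsilon-agreement and convex-hull validity.
def check_epsilon_agreement(inputs, decisions, eps):
    # epsilon-agreement: |v_i - v_j| < eps for every pair of decided values
    agreement = all(abs(a - b) < eps for a, b in combinations(decisions, 2))
    # convex-hull validity: every decision lies in [min(inputs), max(inputs)]
    validity = all(min(inputs) <= v <= max(inputs) for v in decisions)
    return agreement and validity

# Binary inputs v0 = 0, v1 = 1; decisions clustered near the midpoint pass,
# decisions still at the two endpoints do not.
assert check_epsilon_agreement([0.0, 1.0], [0.5, 0.5004], eps=0.001)
assert not check_epsilon_agreement([0.0, 1.0], [0.0, 1.0], eps=0.001)
```

<p>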
The $\kappa$ factor in the communication complexity results from the use of Das, Xiang, and Ren’s Reliable Broadcast (RBC) protocol. If Bracha’s RBC was used instead, the communication complexity of Abraham, Amit, and Dolev’s $\epsilon$-agreement would be $\mathcal{O}(n^4)$ bits.</p> <h1 id="preliminaries">Preliminaries</h1> <p>We define the Asynchronous Crusader Agreement primitive, also referred to as 1-slot Proxcensus by Fitzi, Liu-Zhang, and Loss.</p> <p>A crusader agreement protocol $\mathcal{C}$ amongst $n$ nodes must guarantee the following properties.</p> <ol> <li><em>Agreement</em>: If two honest nodes decide values $x$ and $y$, then either $x=y$ or at least one of the values is $\perp$.</li> <li><em>Liveness</em>: All honest nodes eventually decide and then eventually terminate.</li> <li><em>Validity</em>: If all honest nodes have the same input $x$, then they must output $x$.</li> </ol> <p>Using this primitive, I build an $\epsilon$-agreement protocol.</p> <h1 id="protocol">Protocol</h1> <p><img src="/images/binary_aa_1.png" alt="Protocol" /></p> <p>I will briefly describe the protocol in Algorithm 1. Each node $i$ broadcasts an ECHO message for its value $V_{i}$. It also broadcasts an ECHO for any other value $V’ \neq V_{i}$ for which it received $f+1$ ECHOs; for a value $v$ to receive $f+1$ ECHOs, at least one non-faulty node must have input $v$ to the protocol. Once a node receives $n-f$ ECHO messages for a value $V$, it broadcasts an ECHO2 message for $V$. It then waits until it either receives $n-f$ ECHOs each for two different values $V,V’$, in which case it outputs $\perp$, or receives $n-f$ ECHO2 messages for a single value $V$. In the former case, it updates its state for round $r+1$ as $V_{i} = \frac{V+V’}{2}$, and in the latter case, it updates its state as $V_{i} = V$.</p> <p>If a node receives $n-f$ ECHO2s for a single value $V$, then no other node can receive $n-f$ ECHO2s for any other value $V’ \neq V$. 
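</p> <p>To make the round structure concrete, here is a Python sketch of a single node’s local message-handling rules for one round, following the ECHO/ECHO2 description above. The class and method names are illustrative choices of mine, and networking, self-delivery of one’s own messages, and duplicate-sender checks are omitted for brevity:</p>

```python
# Sketch of one node's per-round rules for the binary epsilon-agreement
# protocol described above. n nodes, at most f Byzantine.
class RoundState:
    def __init__(self, n, f, value):
        self.n, self.f = n, f
        self.value = value           # this node's value for the current round
        self.echo_counts = {}        # value -> number of ECHO messages seen
        self.echo2_counts = {}       # value -> number of ECHO2 messages seen
        self.echoed = {value}        # values this node has already ECHOed
        self.echo2_sent = set()      # values this node has already ECHO2ed

    def on_echo(self, v):
        """Process an incoming ECHO; return messages to broadcast."""
        self.echo_counts[v] = self.echo_counts.get(v, 0) + 1
        out = []
        # f+1 ECHOs imply at least one honest node input v, so amplify it
        if self.echo_counts[v] >= self.f + 1 and v not in self.echoed:
            self.echoed.add(v)
            out.append(("ECHO", v))
        # n-f ECHOs for v: move to the second phase for v
        if self.echo_counts[v] >= self.n - self.f and v not in self.echo2_sent:
            self.echo2_sent.add(v)
            out.append(("ECHO2", v))
        return out

    def on_echo2(self, v):
        self.echo2_counts[v] = self.echo2_counts.get(v, 0) + 1

    def try_decide(self):
        """Return the next-round value once one of the two exits is enabled."""
        # n-f ECHO2s for a single value: adopt that value
        for v, c in self.echo2_counts.items():
            if c >= self.n - self.f:
                return v
        # n-f ECHOs each for two different values: adopt the midpoint
        strong = [v for v, c in self.echo_counts.items()
                  if c >= self.n - self.f]
        if len(strong) >= 2:
            return (strong[0] + strong[1]) / 2
        return None
```

<p>For example, with $n=4$, $f=1$: after three ECHOs for $0$, a node broadcasts an ECHO2 for $0$; after three ECHO2s for $0$, it adopts $0$; if instead it sees three ECHOs each for both $0$ and $1$, it adopts the midpoint $0.5$.</p> <p>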
Therefore, after round $1$, non-faulty nodes’ values can be characterized by one of the following four sets: 1. $\{v_{0}\}$, 2. $\{v_{0},\frac{v_{0}+v_{1}}{2}\}$, 3. $\{\frac{v_{0}+v_{1}}{2},v_{1}\}$, 4. $\{v_{1}\}$, and in each of the four sets, the difference between states is $\Delta_1 \leq \frac{v_{1}-v_{0}}{2}$. After $\log_{2}(\frac{v_{1}-v_{0}}{\epsilon})$ rounds, the nodes achieve $\epsilon$-agreement.</p> <p>The main intuition behind the reduced communication complexity is that we replace the node-centric broadcast phase of Abraham, Amit, and Dolev, which uses $n$ parallel reliable broadcasts, with a value-centric broadcast phase derived from Asynchronous Crusader Agreement. The former phase involves every node participating in the reliable broadcast of every other node, leading to an $\mathcal{O}(n^3)$ factor in communication complexity. However, since our protocol deals with a restricted domain of only two values, the value-centric phase yields an $\mathcal{O}(n)$ factor of improvement over the node-centric broadcast phase.</p> <h2 id="proof-sketch">Proof Sketch</h2> <p>We first prove that if the protocol is in a bivalent state in round $r$, it must move to a bivalent or univalent state in round $r+1$. Assume nodes start round $r$ of the protocol with values $c_0’$ and $c_1’$. From the properties of Crusader Agreement, the protocol moves into state $\{c_0’,\perp\}$ or $\{c_1’,\perp\}$. Honest nodes receiving $\perp$ update their value to $v = \frac{c_0’+c_1’}{2}$, which implies that the range of honest values reduces to $\frac{c_1’-c_0’}{2}$.</p> <p>In contrast, if the state after round $r$ is univalent, the nodes have already agreed. In either case, the range of honest inputs reduces by at least a factor of $\frac{1}{2}$. 
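</p> <p>The halving argument can be sanity-checked with a short sketch: the spread of honest values at most halves every round, so counting the halvings until the spread drops to $\epsilon$ recovers the round bound. (The helper name below is mine, not the protocol’s.)</p>

```python
import math

# Illustrative helper (mine, not from the protocol): count the number of
# worst-case halvings needed to bring the spread of honest values from
# v1 - v0 down to eps.
def rounds_needed(v0, v1, eps):
    rounds, spread = 0, v1 - v0
    while spread > eps:
        spread /= 2  # worst-case spread after one more round
        rounds += 1
    return rounds

# Delta = 1, eps = 2^-10: exactly log2(Delta/eps) = 10 rounds suffice
assert rounds_needed(0.0, 1.0, 2**-10) == 10
assert rounds_needed(0.0, 1.0, 2**-10) == math.ceil(math.log2(1.0 / 2**-10))
```

<p>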
Therefore, after at most $\log_2(\frac{\Delta}{\epsilon})$ rounds, the protocol achieves $\epsilon$-agreement.</p> <h1 id="complexity-analysis">Complexity Analysis</h1> <p>In each round, every node broadcasts an ECHO message for at most two values and an ECHO2 message for at most one value. Therefore, each round of $\epsilon$-agreement through this protocol costs only $\mathcal{O}(n^2)$ messages, with each message of size $\mathcal{O}(\log(\frac{\Delta}{\epsilon}))$ bits. Hence, $\mathcal{O}(\log(\frac{\Delta}{\epsilon}))$ rounds cost $\mathcal{O}(n^2\log^2(\frac{\Delta}{\epsilon}))$ bits overall to achieve $\epsilon$-agreement.</p> <h2 id="references">References</h2> <p> Abraham, Ittai, Yonatan Amit, and Danny Dolev. “Optimal resilience asynchronous approximate agreement.” In Principles of Distributed Systems: 8th International Conference, OPODIS 2004, Grenoble, France, December 15-17, 2004, Revised Selected Papers 8, pp. 229-239. Springer Berlin Heidelberg, 2005.</p> <p> Das, Sourav, Zhuolun Xiang, and Ling Ren. “Asynchronous data dissemination and its applications.” In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 2705-2721. 2021.</p> <p> Fitzi, Matthias, Chen-Da Liu-Zhang, and Julian Loss. “A new way to achieve round-efficient Byzantine agreement.” In Proceedings of the 2021 ACM Symposium on Principles of Distributed Computing, pp. 355-362. 2021.</p>
Learnings from Randomized Algorithms2020-12-24T00:00:00-08:002020-12-24T00:00:00-08:00https://akhilsb.github.io/posts/2020/12/bp-1<p>I took the course <a href="https://fundamentalalgorithms.com/randomized">Randomized algorithms</a> in Fall 2020, offered by Professor Kent Quanrud. This blog post contains an overview of the topics I learnt in the course.</p> <p>Randomized Algorithms (CS590-RA) is touted to be one of the toughest courses (in terms of content) offered in Purdue’s CS department. The course requires Algorithms: Design and Analysis (CS58000) as a prerequisite, along with expertise in mathematical machinery such as probability and linear algebra. This was the most influential course I have taken in a long time; the concepts I learnt changed the way I think about problems and design algorithms for them. The course is divided into two parts: the first half is about using randomization to design approximation algorithms that solve standard conceptual problems, like graph minimum cuts, with higher efficiency and practicality. The second half is about understanding random walks and applying them to design solutions for various problems.</p> <h1 id="learnings-from-the-course">Learnings from the course</h1> <h2 id="approximation-is-not-that-bad">Approximation is not that bad</h2> <p>The beauty of the concept of absolute zero is the fact that it is impossible to achieve. Zero is a concept that exists only in theory. For example, think about drawing 2 line segments of EQUAL length. Can we do this? The maximum precision we can achieve when measuring things is on the order of femtometers (10^(-15) m), which is still infinitely greater than zero. 
However, the mathematical construct of zero allows us to derive many proofs and concepts in mathematics. Considering that zero is practically unachievable, don’t we undergo a lot of stress designing algorithms that solve a given problem EXACTLY?</p> <p>Consider 2 algorithms A1 and A2 which solve a fundamental Computer Science problem - say, the minimum cut of a graph G. A1 solves the problem with 100% accuracy while taking time X and occupying space Y. A2 solves the problem with 99.9999% accuracy while taking time X/100 and occupying space Y/100. Which approach is preferable in terms of practicality? Is it worth spending 100 times more memory and time for the extra 0.0001% accuracy? The course presents techniques that use randomization to realize such gains.</p> <h2 id="approximation-through-randomization">Approximation through Randomization</h2> <p>Randomization techniques such as random sampling are highly effective in achieving strong probabilistic bounds. The course covered beautiful concepts like universal hash functions, HyperLogLog, and Gaussian-sampling-based approximations. We then moved on to LP duality, which I think is one of the most intuitive mathematical concepts, yet extremely hard to prove. In my opinion, the lectures on approximation techniques based on LP duals were among the most challenging in the course.</p> <h2 id="moving-on-to-random-walks">Moving on to Random Walks</h2> <p>The second half of the course was about random walks and their applications. Given a map and the deterministic explorative tendencies of a human, the idea of walking randomly around a graph sounds absurd at first. However, the mathematics supporting the concept completely blew my mind. I learnt about stationary distributions, the analogy with electrical networks, expander graphs, and deterministic connectivity. 
There is a lecture on the Zig-Zag product, after which I wondered about the limits of human imagination. I mean, we all have these moments, right? When we think, how the hell can someone get that idea? The course has many more such joyful moments, where I just said to myself that I am grateful to possess a consciousness that can perceive and understand these concepts.</p> <h2 id="conclusion">Conclusion</h2> <p>The course demands a lot of effort, with two homeworks every week and a lot of brainstorming and head-banging, but every minute I spent on the course was worth it. I will probably reap the benefits of this knowledge for a long time to come. A huge shoutout to Professor Kent Quanrud for teaching such an amazing course!</p>Best Fresher Award2020-12-24T00:00:00-08:002020-12-24T00:00:00-08:00https://akhilsb.github.io/posts/2020/12/bp-2<p>I received the Best Fresher award for Fall 2020, for my work on blockchain protocols on lightweight embedded devices. I will write a blog post about this project in the future, where I will describe the network setup, the challenges, and the accomplishments. I wholeheartedly thank Professor Bagchi and Professor Somali Chaterji for giving me the award.</p> <p><img src="/images/best_fresher.jpg" alt="Certificate" /></p>