Coordinates: KD103, 5 pm, Friday 2nd September
Read-Once ABPs (ROABPs) are models of polynomial computation in which the order of variable multiplication is crucial. Though highly restrictive, they are one of the few models for which algebraic lower bounds have been known for a while. This talk will discuss lower bounds for a natural generalisation: sums of ROABPs.
We will explore a well-known technique for proving algebraic lower bounds, the Partial Derivative method, which should be of general interest to anyone curious about complexity theory.
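As a taste of the method, here is the standard two-step template it follows (a generic sketch for intuition, not the specific measure used in the talk):

```latex
% Partial Derivative method (standard template, sketched).
% Associate to a polynomial f the span of all its partial derivatives,
% and use its dimension as a complexity measure:
\[
  \mu(f) \;=\; \dim \, \mathrm{span}\bigl\{\, \partial_m f \;:\; m \text{ a monomial} \,\bigr\}.
\]
% Step 1: show \mu(g) is small for every g computed by the restricted model.
% Since \mu is subadditive, \mu(g_1 + \dots + g_s) \le \mu(g_1) + \dots + \mu(g_s),
% so a sum of s "cheap" polynomials still has measure at most s times as large.
% Step 2: exhibit an explicit polynomial f with \mu(f) large.
% Together, the two steps force s (or the model's size) to be large.
```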
The talk is based on the work of V. Arvind and S. Raja from 2015, “Some Lower Bound Results for Set-Multilinear Arithmetic Computations”.
Coordinates: KD103, 5 pm, Friday 24th June
James Bond has infiltrated the criminal organisation Spectre. Unfortunately, to communicate with the outside world, he has no fancy equipment, just five neckties, one of which he can wear each day: Red, Blue, Green, Purple and Beige. Using fancy satellite imaging, the people at MI6 can see the colour of 007’s tie.
The ties have not really been washed (Bond was busy with other things), so some colours are hard to tell apart. Red cannot be distinguished from Purple or Beige; similarly, Beige cannot be reliably distinguished from Red or Green, Green from Beige or Blue, Blue from Green or Purple, and Purple from Blue or Red. But there is no problem distinguishing Purple from Beige or Green, for example.
What is the most efficient way for Bond to communicate reliably? Using some Linear Algebra and the help of Abel Prize winner László Lovász, we will figure it out!
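For the curious, here is the punchline in symbols (standard facts, stated ahead of the talk): the confusions above form the 5-cycle $C_5$, and the question is its Shannon capacity.

```latex
% The confusions Red-Purple, Purple-Blue, Blue-Green, Green-Beige, Beige-Red
% form the 5-cycle C_5. The number of messages reliably sendable per day,
% asymptotically, is the Shannon capacity
\[
  \Theta(G) \;=\; \lim_{n \to \infty} \alpha\!\left(G^{\boxtimes n}\right)^{1/n},
\]
% where \alpha is the independence number and \boxtimes the strong graph
% product. Lovász's theta function gives the celebrated answer
\[
  \Theta(C_5) \;=\; \sqrt{5},
\]
% achieved already at n = 2: there are five pairwise-distinguishable
% tie-pairs over two days, while a single day allows only two reliable colours.
```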
Prerequisites: Some knowledge of basic Linear Algebra.
Coordinates: KD103, 5 pm, Friday 10th June
This talk focuses on the fundamentals of Multivariate Cryptography, which is considered a candidate for post-quantum security. I will detail the basic construction of RAINBOW. If time permits (or in a follow-up talk), I will present the recent attacks on RAINBOW by Ward Beullens.
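As a warm-up, here is a toy single-layer Oil-and-Vinegar signature scheme over a small prime field; RAINBOW stacks this idea in layers. This is my own illustrative sketch, not material from the talk, and the parameters (and names like `keygen`, `sign`, `verify`) are purely for demonstration; nothing here is secure.

```python
# Toy single-layer (unbalanced) Oil-and-Vinegar signature scheme,
# the building block that Rainbow layers. Illustrative only; NOT secure.
import random

Q = 31            # small prime field GF(Q)
V, O = 6, 3       # number of vinegar / oil variables
N = V + O

def transpose(M):
    return [list(row) for row in zip(*M)]

def mat_mul(A, B):
    Bt = transpose(B)
    return [[sum(a * b for a, b in zip(row, col)) % Q for col in Bt] for row in A]

def mat_vec(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) % Q for row in M]

def invert(M):
    """Gauss-Jordan inversion mod Q; returns None if M is singular."""
    n = len(M)
    A = [list(row) + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next((r for r in range(col, n) if A[r][col] % Q != 0), None)
        if piv is None:
            return None
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], -1, Q)
        A[col] = [a * inv % Q for a in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [(a - f * b) % Q for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

def keygen():
    # Central map: O quadratic forms whose oil-oil block is zero, so that
    # fixing the vinegar variables leaves a *linear* system in the oils.
    F = []
    for _ in range(O):
        M = [[0] * N for _ in range(N)]
        for i in range(V):
            for j in range(N):        # vinegar-vinegar and vinegar-oil terms
                M[i][j] = random.randrange(Q)
        F.append(M)
    while True:                       # secret invertible change of variables
        T = [[random.randrange(Q) for _ in range(N)] for _ in range(N)]
        Tinv = invert(T)
        if Tinv is not None:
            break
    # Public key hides the structure: P_k = T^t M_k T, i.e. P_k(x) = F_k(Tx).
    P = [mat_mul(transpose(T), mat_mul(M, T)) for M in F]
    return (F, Tinv), P

def sign(secret, y):
    F, Tinv = secret
    while True:
        v = [random.randrange(Q) for _ in range(V)]   # random vinegar values
        L = [[sum(v[i] * M[i][V + j] for i in range(V)) % Q for j in range(O)]
             for M in F]
        rhs = [(yk - sum(v[i] * M[i][j] * v[j]
                         for i in range(V) for j in range(V))) % Q
               for yk, M in zip(y, F)]
        Linv = invert(L)
        if Linv is not None:          # else: unlucky vinegar choice, retry
            o = mat_vec(Linv, rhs)
            return mat_vec(Tinv, v + o)   # signature s with F(Ts) = y

def verify(P, s, y):
    return all(sum(s[i] * Pk[i][j] * s[j] for i in range(N)
                   for j in range(N)) % Q == yk for Pk, yk in zip(P, y))

secret, public = keygen()
digest = [random.randrange(Q) for _ in range(O)]      # stand-in for a hash
sig = sign(secret, digest)
assert verify(public, sig, digest)
```

The key design point, which RAINBOW inherits: because the central map has no oil-oil terms, the signer can plug in random vinegar values and solve a linear system, while an attacker who only sees the composed public quadratics faces a hard multivariate-quadratic system.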
Coordinates: KD103, 5 pm, Friday 6th May
Given a sparse multivariate polynomial with s-many monomials, the sparse polynomial factorization problem asks for an efficient algorithm to factor the given polynomial. We discuss the state of the art for this problem. In particular, we will discuss its connection with a geometric/combinatorial object called the Newton polytope. We shall mainly discuss my recent joint work with Prof. Nitin Saxena, which gives a deterministic poly-time algorithm for this problem when the input is a symmetric polynomial.
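For context (a classical fact, not part of the new result): the relevance of Newton polytopes to factoring comes from Ostrowski's observation that polytopes of factors combine as Minkowski sums.

```latex
% The Newton polytope of f is the convex hull of its exponent vectors:
\[
  \mathrm{Newt}(f) \;=\; \mathrm{conv}\bigl\{\, e \in \mathbb{Z}_{\geq 0}^{n} \;:\;
  \text{the monomial } x^{e} \text{ appears in } f \,\bigr\},
\]
% and Ostrowski's theorem says polytopes multiply as Minkowski sums:
\[
  \mathrm{Newt}(g \cdot h) \;=\; \mathrm{Newt}(g) + \mathrm{Newt}(h).
\]
% Consequently, any factor's polytope is a "summand" of Newt(f), which
% constrains the monomials a factor of a sparse polynomial can contain.
```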
Coordinates: KD103, 5 pm, Friday 29th April
Given $(a_1, \dots, a_n, t) \in \mathbb{Z}_{\geq 0}^{n+1}$, the Subset Sum problem is to decide whether there exists $S \subseteq [n]$ such that $\sum_{i \in S} a_i = t$. Bellman (1957) gave a pseudopolynomial-time dynamic programming algorithm which solves Subset Sum in $O(nt)$ time and $O(t)$ space.
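A minimal sketch of Bellman's table (the standard textbook DP, here compressed into a Python big integer used as a bitset; the function name `subset_sum` is mine):

```python
# Bellman's O(nt)-time, O(t)-space dynamic program for Subset Sum, with the
# boolean table packed into one Python integer used as a bitset:
# bit j of `reach` is 1 iff some subset of the items seen so far sums to j.
def subset_sum(a, t):
    reach = 1                          # only the empty sum 0 is reachable
    mask = (1 << (t + 1)) - 1          # keep only sums in 0..t
    for x in a:
        reach |= (reach << x) & mask   # for each item: either skip it or add it
    return (reach >> t) & 1 == 1

assert subset_sum([3, 34, 4, 12, 5, 2], 9)       # 4 + 5 = 9
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)  # no subset sums to 30
```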
In this talk, we present an $\tilde{O}(k \cdot (n+t))$ time deterministic algorithm which finds the Hamming weights of all the realisable sets of a Subset Sum instance. Our algorithm is parameterized by $k$, a given upper bound on the number of realisable sets (i.e., the number of solutions, subsets summing exactly to $t$). We will also show that Subset Sum with a unique solution is already NP-hard under randomized reductions. This makes the regime of parameterized algorithms, in terms of $k$, very interesting.
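To make the computed object concrete, here is a naive baseline of my own (an $O(n^2 t)$-time variant of Bellman's table, emphatically not the talk's $\tilde{O}(k \cdot (n+t))$ algorithm):

```python
# Naive baseline for the object the talk computes much faster:
# the set of Hamming weights |S| over all S with sum_{i in S} a_i = t.
def solution_weights(a, t):
    # reach[s] = set of subset sizes achieving sum s
    reach = [set() for _ in range(t + 1)]
    reach[0].add(0)
    for x in a:
        for s in range(t, x - 1, -1):  # downward sweep: each item used at most once
            reach[s] |= {w + 1 for w in reach[s - x]}
    return reach[t]

print(solution_weights([1, 2, 3, 4], 4))   # {1, 2}, from {4} and {1, 3}
```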
This is joint work with Pranjal Dutta and was published at CALDAM 2022.
Coordinates: KD103, 4 PM IST, 20th April 2022
Lattice-based cryptographic schemes have generated much interest in recent years. Their security relies on the hardness of computational problems over geometric objects called lattices. These schemes have been used to build advanced cryptographic primitives such as fully homomorphic encryption, and they are believed to be resistant to quantum attacks. Given the recent advancement in quantum technologies, institutes such as the National Institute of Standards and Technology (NIST) and the European Telecommunications Standards Institute (ETSI) have initiated processes for the standardization and wide deployment of lattice-based schemes over the next few years.
The security of these schemes crucially relies on the assumption that the best-known algorithms for the corresponding lattice problems cannot be improved. In this talk, I will describe the state of the art of this assumption. More specifically, I will talk about the fine-grained hardness of lattice problems in different p-norms. I will also talk about our recent work, which shows that it is impossible to get any fine-grained hardness for lattice problems in the Euclidean norm, under a complexity-theoretic assumption.
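To fix terminology (a standard framing of the term, added here for orientation): "fine-grained hardness" means ruling out even mildly subexponential improvements under a hypothesis such as the Strong Exponential Time Hypothesis (SETH).

```latex
% Fine-grained hardness for a lattice problem in rank n typically means a
% statement of the form: assuming SETH, for every \varepsilon > 0 there is
% no algorithm solving the problem in time
\[
  2^{(1-\varepsilon)n}.
\]
% Such results are known for several \ell_p norms; the talk discusses why
% this style of hardness provably cannot extend to the Euclidean norm
% \ell_2, under a complexity-theoretic assumption.
```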
Coordinates: KD103, 5 PM IST, 14th April 2022
Don’t we all dream of the perfect assistant whom we can just tell what to do, and who can figure out how to accomplish it? Formally, given a specification F(X,Y) over the set of input variables X and output variables Y, we want the assistant, aka a functional synthesis engine, to design a function G such that (X, Y=G(X)) satisfies F. Functional synthesis has been studied for over 150 years, dating back to Boole in the 1850s, and yet scalability remains a core challenge. Motivated by progress in machine learning, we design a new algorithmic framework, Manthan, which views functional synthesis as a classification problem, relying on advances in constrained sampling for data generation and on advances in automated reasoning for a novel proof-guided refinement and provable verification. In an extensive and rigorous evaluation over 609 benchmarks, we demonstrate that Manthan significantly improves upon the current state of the art, solving 509 benchmarks compared to 280, the most solved by any prior technique. The significant performance improvements, along with our detailed analysis, highlight several interesting avenues for future work at the intersection of machine learning, constrained sampling, and automated reasoning.
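To make the problem statement concrete, here is a brute-force toy of my own for a one-output specification (it illustrates what is being synthesised, not how Manthan does it; the names `synthesize` and `spec` are illustrative):

```python
# Functional synthesis, stated concretely: given a specification F(X, Y),
# find G such that (X, Y=G(X)) satisfies F whenever some Y works for that X.
# Brute-force toy for |Y| = 1 -- the problem Manthan solves at scale.
from itertools import product

def synthesize(spec, n_inputs):
    """Return G as a truth table {x-tuple: y-bit}."""
    table = {}
    for x in product([0, 1], repeat=n_inputs):
        choices = [y for y in (0, 1) if spec(x, y)]
        # if no y satisfies the spec for this x, the output is unconstrained
        table[x] = choices[0] if choices else 0
    return table

# Example spec: y must equal the XOR of the two inputs.
spec = lambda x, y: y == x[0] ^ x[1]
print(synthesize(spec, 2))   # {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 0}
```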
Coordinates: 4PM IST, 08th Sept 2021
One of the major motivations to study Arithmetic/Algebraic Circuit Complexity was the “slow” progress in proving lower bounds in Boolean Circuit Complexity. Based on some empirical evidence, researchers believed that the power of the underlying algebra and the possibility of cancellations could make computation easier to analyse in the algebraic setting.
In this talk we shall outline some of the connections between Boolean Circuit Complexity and Algebraic Circuit Complexity. In particular, building upon previous work (due to Yao, Beigel & Tarui, and Forbes, Kumar & Saptharishi), we shall present a path towards proving stronger (nonuniform) ACC0 lower bounds via proving “functional” lower bounds in the algebraic setting.
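For orientation (the standard notion, sketched here; not a statement from the talk): a “functional” lower bound only constrains how a circuit behaves on the Boolean cube, rather than as a formal polynomial.

```latex
% A functional lower bound for f : \{0,1\}^n \to \{0,1\} is a size bound
% s(n) such that every algebraic circuit C satisfying
\[
  C(x) \;=\; f(x) \quad \text{for all } x \in \{0,1\}^n
\]
% requires size at least s(n). Since many distinct polynomials agree with
% f on the cube, this quantifies over far more circuits than a lower bound
% for one fixed polynomial; that extra strength is what lets such bounds
% transfer back to Boolean classes like ACC^0.
```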
Joining Link: Google Meet
Coordinates: Watch It Here
In view of the lack of strong lower bounds in most branches of complexity theory, some research has focused on whether the known approaches can ever lead to strong enough results. In the case of Boolean circuits, Razborov and Rudich (1995) observed that most of the then-known lower bounds could be proven using a certain template, which they called the ‘natural proofs’ framework. They further showed that if strong enough One-Way Functions exist, then no natural proof can lead to a super-polynomial lower bound against Boolean circuits!
Recently, Forbes, Shpilka and Volk (2018) and Grochow, Kumar, Saks and Saraf (2017) independently came up with an algebraic analogue of the natural proofs framework: algebraic or (algebraically) natural proofs. Much like the Boolean case, both these works show that the existence of algebraic natural proofs for poly-sized algebraic circuits (the class VP) directly contradicts a derandomisation assumption about algebraic circuits. However, this assumption is quite non-standard, and it is therefore difficult to form any strong opinion about it.
In a recent joint work with Mrinal Kumar (IITB), C. Ramya (TIFR) and Ramprasad Saptharishi (TIFR), we show that if there is an exponential separation between the classes VNP (algebraic NP) and VP (algebraic P), then there are no algebraic natural proofs for the class VNP. This is perhaps the only known result about the existence of algebraic natural proofs that follows from standard hardness assumptions. Moreover, this statement also has some interesting consequences for the efficacy of ‘natural lower bound techniques’ in the algebraic world.
In this talk, we will first get to know the framework of algebraic natural proofs, and then discuss our result and its consequences. After that, if time permits, we will briefly go over the main ideas behind our proof.
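As a reference point ahead of the talk (the standard definition, paraphrased): an algebraically natural proof is an efficiently computable “equation” for the class.

```latex
% An algebraic natural proof against a class \mathcal{C} of n-variate,
% degree-d polynomials is a nonzero polynomial D (the "equation"), itself
% computable by a small algebraic circuit, such that
\[
  D\bigl(\mathrm{coeff}(f)\bigr) \;=\; 0 \quad \text{for every } f \in \mathcal{C},
\]
% where coeff(f) denotes the vector of all coefficients of f.
% Any explicit g with D(coeff(g)) \neq 0 is then certified to lie
% outside \mathcal{C}, yielding a lower bound.
```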
Joining Link: Google Meet
Coordinates: Watch It Here
The recent advances in the design and construction of quantum computers require us to design cryptosystems that remain secure even in the presence of quantum computers. The security of most present-day cryptosystems depends on the hardness of integer factoring or the discrete logarithm problem, both of which are polynomial-time solvable on a quantum computer [Shor, FOCS 1994]. Quantum resistance, along with many other unique properties of lattices, makes lattice-based cryptosystems the most popular candidates for post-quantum cryptography.
In this talk, we will present a number of exponential-time reductions between the Shortest Vector Problem and the Closest Vector Problem over lattices in different ℓp norms (SVPp and CVPp, respectively). We will also discuss how these reductions are useful for the fine-grained hardness of lattice problems.
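For completeness, here are the standard definitions of the two problems (textbook material, added for readers new to lattices):

```latex
% The lattice generated by linearly independent b_1, ..., b_n is
\[
  \mathcal{L}(B) \;=\; \Bigl\{\, \textstyle\sum_{i=1}^{n} z_i b_i \;:\; z_i \in \mathbb{Z} \,\Bigr\}.
\]
% SVP_p: given a basis B, find a shortest nonzero v \in \mathcal{L}(B)
% in the \ell_p norm.
% CVP_p: given B and a target t, find v \in \mathcal{L}(B) minimising
% \|v - t\|_p.
% CVP_p is at least as hard as SVP_p; the talk's reductions run in
% exponential time and go between the two problems, possibly across norms.
```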
This is a joint work with Divesh Aggarwal, Yanlin Chen, Zeyong Li and Noah Stephens-Davidowitz.