Dissertations - M Tech (CRS)
Permanent URI for this collection: http://164.52.219.250:4000/handle/10263/7285
Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Technology in Cryptology and Security
Item: Project on securing personal health information associated with HIPAA compliance (Indian Statistical Institute, Kolkata, 2021-07) - Ramani, Chandan

In this project, my aim was to research how to secure personal health information in association with HIPAA compliance. First, we tried to understand what HIPAA is and how it relates to health information. Then we tried to find suitable encryption schemes for health data; it was also necessary to find authentication schemes for this project. Our goal was to determine how to store health data securely in a cloud system and how to access that data with proper authentication. All this work was done keeping in mind that everything should be HIPAA compliant.

Item: Study on the Generation of Pseudo-Random Numbers using Recurrent Neural Network (Indian Statistical Institute, Kolkata, 2021-07) - Ravish, Ankit

Generating random numbers is an important task in cryptography as well as computer science. Pseudo-randomness is fundamental to cryptography and is required for achieving any cryptographic function, such as encryption, authentication and identification. The quality of a pseudorandom number generator is determined by the randomness of the sequences it generates. In this dissertation, pseudo-random numbers have been generated using a recurrent neural network. Initially, the neural networks are trained with sequences of random numbers. The random numbers used as the training set for the neural network model were generated with the Advanced Encryption Standard (AES) in counter mode of operation. It is known that the padding vectors generated in each operation are random; they are considered random because the same sequence recurs only after a long interval of time.
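As a rough illustration of the counter-mode generation step described above, the sketch below derives a deterministic pseudorandom byte sequence by hashing a key together with an incrementing counter. Since Python's standard library has no AES, SHA-256 stands in for the block cipher here; the key and block count are illustrative, not taken from the dissertation.

```python
import hashlib

def ctr_mode_sequence(key: bytes, n_blocks: int) -> bytes:
    """Generate a pseudorandom byte sequence by hashing key || counter.

    Stand-in for AES-CTR: SHA-256 over the (key, counter) pair plays
    the role of the block cipher, so each counter value yields one
    fresh 32-byte block.
    """
    out = []
    for counter in range(n_blocks):
        block = hashlib.sha256(key + counter.to_bytes(16, "big")).digest()
        out.append(block)
    return b"".join(out)

stream = ctr_mode_sequence(b"training-seed", 4)
print(len(stream))  # 4 blocks of 32 bytes = 128
```

A sequence produced this way is deterministic given the key, which is what makes it usable as a reproducible training set.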
The neural network considered in this study is the recurrent neural network, chosen because its internal memory allows it to remember historic input and to make decisions by considering the current input alongside what it has learned from previous inputs. The neural networks have been trained using different loss functions. After fitting each model to the training set, a sequence of predicted values has been obtained from it. The randomness of these predicted values is checked using the NIST tests. Lastly, the aim is to compare and report the number of NIST tests passed by the predicted sequences of each model.

Item: Fault Analysis of the Prince Family of Lightweight Ciphers (Indian Statistical Institute, Kolkata, 2021-07) - Kundu, Anup Kumar

Privacy is one of the most fundamental aspects of the digital age that we live in. With the advent of the Internet and the advances in both nanoscale electronics and communication technologies, data has become the new oil, and wherever there is data there is a notion of its privacy. Whether data is at rest or in motion, privacy and authenticity have always been the hallmark of modern-day communication. Cryptography provides us the necessary tools and primitives that help us achieve, among other goals, privacy, integrity and authenticity, in isolation and more recently even simultaneously. While conventional cryptography tackles most of these problems efficiently, it has proven particularly unsuitable for resource-constrained environments, which are increasingly prevalent in present-day Internet-of-Things (IoT) settings, RFID tags and so on. This is primarily because traditional cryptography is "heavy-weight" in terms of the computational resources it demands, be it chip area, power consumption or throughput, and hence becomes unusable or overwhelming for devices that operate on limited resources.
This points us in the direction of a new type of cryptography referred to as "lightweight" crypto. Lightweight cryptographic algorithms are tailored for resource-starved settings and hence perform better in such environments. The importance of lightweight crypto is evidenced by the ongoing multi-year global competition by NIST for standardizing the next generation of lightweight authenticated ciphers, presently in its final round. This work consists of the cryptanalysis of two lightweight block ciphers, PRINCE and PRINCEv2, which are based on the SPN design philosophy. PRINCE has been around for some time and was proposed with unrolled implementations in mind; PRINCEv2 is the new version of PRINCE, reported in SAC 2020. In the current work, we introduce a new fault attack on PRINCE based on the random bit-fault model, where faults are injected in the input of the 10th round. The attack is able to uniquely recover the key using 7 faults. It is interesting that the random bit-fault model, though a popular fault model, has not yet been explored independently on PRINCE. Though Song and Fu [SH13] have explored the random-nibble fault and mentioned the bit model as a special case, they fail to capture the full scenario. Herein lies the motivation of the current work. We look at the bit model in isolation and in depth, and conclude that it is more effective both in terms of the point at which the fault is injected and the complexity of the resulting DFA. Regarding the point of fault injection, it is important to emphasize that in the attack reported in [SH13] the fault is actually injected before/during the SubByte-Inverse operation in the 10th round, which is the last operation of that round. Thus it would be more appropriate to state the fault injection point as 10.5 rounds at best, instead of the 10 rounds claimed by the authors in [SH13] (refer Fig. 3.7).
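The random bit-fault model discussed above can be illustrated with a minimal sketch: flip exactly one uniformly chosen bit of the cipher state at the targeted round input. The function name and the 64-bit width (PRINCE's state size) are illustrative; this is not the dissertation's attack code.

```python
import random

def inject_random_bit_fault(state: int, width: int = 64):
    """Flip one uniformly chosen bit of a width-bit cipher state.

    Models the random bit-fault model: the attacker knows that exactly
    one bit of the round input was flipped, but neither its position
    nor its original value.
    """
    pos = random.randrange(width)
    return state ^ (1 << pos), pos

clean = 0x0123456789ABCDEF
faulty, pos = inject_random_bit_fault(clean)
assert bin(clean ^ faulty).count("1") == 1  # exactly one bit differs
```

In a DFA, the XOR difference between the correct and faulty ciphertexts produced from such a state is what constrains the key candidates.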
We touch upon this aspect in detail in the discussion section later in this work. On the contrary, the random bit-flip DFA proposed here actually induces the fault at the input of the 10th round. The work further gives a classification of the fault invariants generated at the end of the 11th round due to a random bit-fault at the beginning of the 10th round. Further, PRINCEv2 was introduced with many modifications, primarily in the key schedule, to thwart many classical attacks on PRINCE. We investigated PRINCEv2 in the light of the current work and found it equally vulnerable to all attacks reported here. Finally, we look at PRINCE-like ciphers in general and comment on the impact of the α-reflection property on the amplification of the scope of fault injection.

Item: Improving the efficiency of RLWE-based IPFE and its application on privacy-preserving biometrics (Indian Statistical Institute, Kolkata, 2021-07) - Adhikary, Supriya

Encryption is a method with which one can securely share data over an insecure channel. Traditional public-key encryption follows an all-or-nothing approach: using a key, the receiver either recovers the whole message or nothing. In functional encryption (FE) it is possible to control the amount of information revealed to the receiver. The emerging use of cloud computing and the massive amount of collected data leave us with a question of data privacy, and for many applications the regular notion of public-key encryption may be insufficient. For example, a hospital may want to share patients' private healthcare data with researchers for analytics without disclosing patients' private information. Functional encryption can be very useful in such a scenario: the authority (hospital) can provide a secret key sk_f to the researchers corresponding to a function f, and the researchers can only obtain the evaluation f(x), so they can compute on patients' data without violating the privacy of the patients.
The idea of functional encryption was first introduced in terms of identity-based encryption [6, 41], attribute-based encryption [38] and predicate encryption [22]. All of these extensions and their variants can be unified under the name functional encryption for an arbitrary function f. Inner Product Functional Encryption (IPFE) is one of the variants of FE. IPFE has been instantiated based on different assumptions such as the decisional Diffie-Hellman (DDH) and learning with errors (LWE) assumptions [3, 4]. The first IPFE scheme based on the RLWE assumption was recently introduced by Mera et al. [29]. RLWE schemes tend to be efficient, but the main bottlenecks in any RLWE scheme are Gaussian sampling and large polynomial multiplication; these operations account for most of the performance loss in such schemes, and improving them is required for better performance. Our primary objective in this thesis is twofold. (a) Improving the efficiency of RLWE-based IPFE [29]: One basic observation is that most sections of the scheme can be run in parallel without any change in the result. We have used OpenMP to produce a multi-threaded implementation of the scheme, which allows the code to run on multiple cores simultaneously and improves performance. Another aspect of performance optimization is an AVX2 implementation. Intel Advanced Vector Extensions (AVX) is a set of instruction-set extensions for single instruction, multiple data (SIMD) operations on Intel architecture CPUs; AVX2 was first supported by Intel with the Haswell processor, which shipped in 2013. We propose a fast vectorized polynomial multiplication using Intel AVX2. (b) Privacy-preserving biometric authentication: We introduce an IPFE-based privacy-preserving biometric authentication protocol, using the optimized IPFE library developed in this work.
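As a rough illustration of the polynomial multiplication bottleneck named in (a), here is a schoolbook multiplication in the ring Z_q[x]/(x^n + 1), the negacyclic ring typically used in RLWE constructions. This is a minimal sketch, not the thesis's implementation; the AVX2 routine computes the same product far faster.

```python
def poly_mul_negacyclic(a, b, q):
    """Schoolbook multiplication in Z_q[x]/(x^n + 1).

    Coefficients that would land at degree >= n wrap around with a
    sign flip, because x^n = -1 in this ring. Vectorized or NTT-based
    implementations speed up exactly this computation.
    """
    n = len(a)
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:  # x^n = -1: wrap with a sign flip
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

# (1 + x)^2 = 1 + 2x + x^2 = 2x in Z_17[x]/(x^2 + 1)
print(poly_mul_negacyclic([1, 1], [1, 1], 17))  # [0, 2]
```

The quadratic loop above is what makes "large polynomial multiplication" a bottleneck for realistic degrees such as n = 256.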
We then show the difference between using this IPFE-based protocol and a similar HE-based approach to the protocol.

Item: Prediction of Rate of Penetration in Drilling (Indian Statistical Institute, Kolkata, 2021-07) - Roy, Abir Lal

Drilling has become an expensive and necessary operation in the exploration of petroleum and natural gas. The goal is to increase drilling speed at minimum cost while maintaining safety. Predicting the rate of penetration (ROP) in real time is very important for optimising drilling cost, since the greater the ROP, the lower the drilling cost. In this work, the typical extreme learning machine (ELM) and an efficient learning model (Artificial Neural Network) have been used for ROP prediction. Since the relationship between ROP and the set of parameters affecting it is highly non-linear, these learning models have been used to capture the non-linearity. The models have been built using WITSML real-time drilling data, which is open-source and hence erroneous. Results indicate that both ELM and ANN are competent for ROP prediction, though ELM has a higher learning speed than ANN. Though the quality of the data set is not up to the mark, we still managed to achieve around 70% and 62% MAPE (mean absolute percentage error) scores for the ELM and ANN models respectively. This work will help drilling engineers predict ROP according to their computation and accuracy demands. We shall now discuss a different work, on computer vision. Sewer pipeline networks have become a main concern of modern municipalities around the world, as these networks are old and reaching their design lifetime; meanwhile, increasing environmental and health requirements, growing demands, and tight budgets have all made the problem harder to deal with. In order to prevent severe and costly damage, sewer system conditions need to be monitored through timely and comprehensive periodic assessment.
Currently, manual inspection is common practice in the inspection and assessment of sewer networks. Visual inspection requires hundreds of hours of data processing by certified inspectors to detect defects (i.e., crack, joint offset, roots, deposit, infiltration, etc.) and assess defect severity (i.e., length, number, consequences, etc.). However, manual inspection of extensive sewer systems is error-prone, subjective, and time-consuming, and as a result different types of defects in sewer pipe systems sometimes go undetected until they have caused considerable damage. Our objective here is to develop an automated model, using computer vision techniques, for sewer condition assessment, and to automate the classification and detection of the different types of defects (i.e., crack, fracture, obstruction, etc.) found in sewer pipe systems.

Item: Evaluation of Optical Character Recognition (OCR) accuracy: Supervised and Unsupervised techniques (Indian Statistical Institute, Kolkata, 2021-07) - Banerjee, Niladri

This work aims to find an efficient method for measuring Optical Character Recognition (OCR) accuracy in the absence of the ground-truth text. To obtain the desired result, we initially tried some efficient supervised accuracy-measuring techniques, which assume the ground-truth text is available. We then tried some unsupervised techniques, which work without the ground-truth text and are the final goal of our project, and compared their performance against the previously obtained supervised techniques. Our final project goal is to provide an efficient unsupervised accuracy-measuring technique that can help automate the document analysis process.

Item: A Catalogue for Provably Secure Symmetric Modes and Primitives (Indian Statistical Institute, Kolkata, 2021-07) - Dutta, Anisha

Modern-day symmetric-key cryptology is enriched with numerous contributions, from symmetric-key primitives to modes of operation.
The approach of designing and developing provably secure constructions has accelerated the growth of this subject. Many encryption, authentication and authenticated encryption modes are publicly available that are provably secure and give good results in terms of efficiency, but there are few resources that study all these modes and draw conclusions about their performance at the same time. This work is intended to study and compare all these modes of operation, both in terms of security (confidentiality and integrity) and efficiency (implementation area and throughput). We took care of the different security notions and design rationales of the compared schemes and generalised them as much as possible.

Item: Efficient and Secure Access Control for Sensitive Healthcare Data (Indian Statistical Institute, Kolkata, 2021-07) - Samanta, Asmita

Healthcare services produce and use a great deal of sensitive personal data, and this healthcare data has very high black-market value. To make healthcare data easily accessible, we can consider an access control server; such a server has to be very secure, while the data must also remain easily accessible to the patients themselves and to authorized caregivers. In this thesis we have studied an existing token-based access control solution that is being applied to protect medical data in a hospital, and observed its security limitations. We then modify that model, using Multi-Authority CP-ABE as a building block, to overcome these limitations. We have proposed two modified models. Our first model relies on centralized MA-CP-ABE, computed on a composite-order bilinear group. Since it is a centralized model, there is a central authority; in our case, the External IAM plays the role of the central authority, and we have used the External IAM and the Policy Decision Point as our two attribute authorities.
According to the security analysis, our first model is adaptively secure; this analysis was done in the standard model. Our second model relies on decentralized MA-CP-ABE, computed on a prime-order bilinear group. Since it is a decentralized scheme, there is no central authority; here too we have used the External IAM and the Policy Decision Point as our two attribute authorities. According to the security analysis, our second model is CPA secure; this analysis was done in the random oracle model. Our second model is more efficient than the first in terms of computation cost, whereas our first model is more efficient in terms of communication cost. We have implemented the decentralized Multi-Authority CP-ABE scheme, the building block of our second model, for use in the modified access control model. The code is written in Python using the Charm-crypto framework. Because of decryption outsourcing, our final decryption time is constant: it depends neither on the size of the data consumer's attribute set nor on the number of attributes in the access policy. We have also implemented a modified LSSS that is more efficient than Charm's LSSS, introduced a revocation property in the scheme, and provided insights on how to implement the whole access control model.

Item: Bulk Categorization Analyse (Indian Statistical Institute, Kolkata, 2021-07) - Dhar, Arunava

Item: Investigating Applications of The Target Difference Algorithm in Keccak & Ascon (Indian Statistical Institute, Kolkata, 2021-07) - Manna, Asim

Item: Verifiable e-auction over a Block-Chain (Indian Statistical Institute, Kolkata, 2021-07) - Chakraborty, Sayantan

Auction has been an integral part of trading throughout the ages.
Sealed-bid e-auction (both first-price and second-price) is considered a classic example of a secure multi-party computation problem. In this project, without assuming any role for an auctioneer, we try to formulate a protocol in which the bidders jointly compute the highest bid while keeping all losing bids secret from the other bidders. The focus of the work is to study and implement a verifiable sealed-bid e-auction scheme in a decentralized setting. We have studied some possible choices of schemes and decided to work further on the SEAL protocol, which is the first decentralized sealed-bid auction protocol to achieve linear computation and communication complexity. The project aims to build a platform that takes bit-strings as bid inputs from the bidders and outputs the maximum among the bids. We have implemented the segments of the protocol in the Solidity language over a test Ethereum network, and the front end is to be built using HTML. This implementation aims to achieve public verifiability without leaking information about the bids from the bidders.

Item: Physical attacks on CCA-Secure Lattice-based KEM SABER (Indian Statistical Institute, Kolkata, 2021-07) - Mondal, Puja

Nowadays, the security of the most widely used public-key algorithms is based on the hardness of one of the following problems: 1. integer factorization; 2. the elliptic-curve discrete logarithm problem. But these problems can be solved by Shor's algorithm [38] and Proos-Zalka's algorithm [31] on a powerful quantum computer. The relief is that no such quantum computer is available yet, but given the continuous progress of computer science, we can expect one within a few decades. To secure communication we will then need cryptographic schemes that are quantum secure, that is, schemes that cannot be attacked by a powerful quantum computer. For this reason, post-quantum cryptographic schemes are needed.
The lattice-based public-key cryptographic schemes Saber [15], Kyber [7] and NTRU are secure against attacks from a quantum computer; these schemes were selected for the 3rd round of the NIST Post-Quantum Cryptography standardization program. The security of Saber is based on the Module Learning with Rounding (MLWR) problem [15], which is assumed to be computationally hard [2]. Saber.PKE is an IND-CPA secure scheme and can be transformed into a scheme secure against chosen-ciphertext attacks (IND-CCA secure) by applying well-known CCA conversions such as the Fujisaki-Okamoto transform [19]. The remaining important task is to check the security of the implementation of SABER, because even a perfectly secure scheme is broken if not implemented correctly. For example, RSA is a public-key encryption scheme [17] whose security is based on the hardness of factoring a large number; we assume that factoring a large integer is hard, and to date there is no efficient factoring method. Yet at the RSA Data Security and CRYPTO conferences in 1996, Kocher presented the "timing attack" on RSA. To secure a cryptographic scheme, we have to protect it from every possible attack. For such protection, we first analyze the algorithm; if the scheme is mathematically secure, we then analyze the implemented scheme and find all possible points where a fault can be injected, and check whether the injected fault leaks information about the secret. If information leaks after a fault is injected into the implementation, a countermeasure must be put in place to prevent this fault attack. In this project, we first inject a fault into the decapsulation of the CCA-secure scheme SABER. After that, we query the decapsulation oracle with constructed dummy ciphertexts (which may not be valid ciphertexts), and then, using our attack models, we recover the whole secret.
To recover the secret, we need at most 3072 queries with constructed ciphertexts to the decapsulation oracle for the parameter set (n = 256, l = 3, q = 2^13, p = 2^10, μ = 8) of SABER.

Item: Design and Analysis of Blockchain-based E-voting Protocols (Indian Statistical Institute, Kolkata, 2021-07) - Ghosh, Sarbajit

Voting is the most fundamental cornerstone of representative democracy. With the advent of internet technologies, various types of e-voting mechanisms have emerged. Building an e-voting system capable of performing as well as a traditional voting system is a very challenging task. On the positive side, blockchain technology inherently provides critical security properties for designing secure e-voting protocols. In this work, we have studied the decentralized boardroom-scale e-voting protocols designed by McCorry et al. in 2017, and we have extended their work in multiple directions. Our protocol supports (A) an arbitrary number of candidates; (B) majority-based counting; and (C) vote abstention; (D) we also provide a theoretical analysis of all the protocols.

Item: Robust Android Malware Detection with CTGAN (Indian Statistical Institute, Kolkata, 2024-06) - Sarkar, Bivash

In this paper our main objective is to build a robust malware detection system by enhancing its ability to detect malicious applications. Machine-learning-based models have been used to detect and classify malware and benign samples, while malware attackers have strong motivation to attack such ML-based algorithms. Malware attackers usually have no access to the detailed structures and parameters of the machine learning models used by malware detection systems, and can therefore only perform black-box attacks. With the proliferation of malware threats, the development of robust detection methods is imperative.
Generative Adversarial Networks (GANs) have recently emerged as a promising avenue for generating synthetic data, offering potential applications in augmenting datasets for malware detection. This paper presents a comparative analysis of contemporary GANs and Conditional Tabular GANs (CTGAN) in the context of detecting malware and benign samples generated through GANs. Through extensive experimentation on diverse datasets, including both benign and malicious samples, we demonstrate that CTGAN outperforms contemporary GAN architectures in generating synthetic data that closely resembles real-world malware behaviour. Our evaluation metrics encompass various aspects of detection accuracy, including precision, recall, F1-score, TPR, AUC-ROC, the confusion matrix, and generator and discriminator loss. Additionally, we analyze the robustness of the generated samples against state-of-the-art malware detection techniques. The results indicate that CTGAN exhibits superior performance in producing synthetic malware instances that challenge existing detection methods such as MalGAN and LSGAN, showcasing its potential for enhancing the efficacy of malware detection systems. CTGAN improves adversarial training by around 20% when compared with an untrained detector. This study contributes to the advancement of GAN-based approaches in cybersecurity and underscores the significance of leveraging synthetic data generation techniques for improving malware detection capabilities.

Item: Protecting the Unbalanced Oil and Vinegar Signature Scheme against Side-channel Attack (Indian Statistical Institute, Kolkata, 2024-06) - Ojha, Uttam Kumar

With the recent development of quantum computing, there is an urgent need for post-quantum cryptography (PQC). The National Institute of Standards and Technology (NIST) initiated a public process to standardize PQC algorithms to address this issue in 2016.
To search for new signature schemes with diverse hardness assumptions, short signature sizes and fast verification, NIST called for additional digital signature schemes for PQC in 2022. Based on multivariate cryptography, the Unbalanced Oil and Vinegar (UOV) signature scheme is a candidate in this additional round. The scheme has withstood two decades of cryptanalysis and offers a short signature size and fast verification; we believe it is a strong candidate for this round. As usual, the scheme is mainly designed to resist mathematical attacks; however, deploying it on an actual device leaks unintended information through side channels such as power consumption. Side-channel analysis exploits that unintended information to recover the secrets of the scheme, and a few attacks using correlation power analysis have recently been shown against it. Masking is a well-known and provably secure countermeasure against such attacks. In this thesis, we describe the first masked implementation of the UOV scheme and provide a security proof of our implementation in the probing model.

Item: Security Analysis of Cryptographic Primitives Used in Blockchain (Indian Statistical Institute, Kolkata, 2024-06) - Das, Atosi

This thesis conducts a comprehensive security analysis of cryptographic primitives within the context of blockchain technology, focusing on MD5, Elliptic Curve Cryptography (ECC), and digital signatures. The first objective was to evaluate the security of MD5.
The analysis revealed that MD5, once widely used, is vulnerable to collision attacks, leading to its deprecation in favor of more secure hash functions such as SHA-256 for blockchain applications.

Item: Exploring the Underlying Assumptions of Lattice Constructions: A Theoretical Investigation (Indian Statistical Institute, Kolkata, 2024-07) - Datta, Arkaprava

Owing to its adaptability in cryptographic protocols and its possible defence against quantum attacks, lattice-based cryptography has become a very attractive topic. This survey explores the fundamental hard problems in lattice theory, such as the Shortest Vector Problem (SVP), the Closest Vector Problem (CVP), and the Learning With Errors (LWE) problem, which form the cornerstone of lattice-based cryptosystems. We explore the intricate mathematical structures and specifics of these problems, highlighting their computational difficulty and importance. In addition, we look at the idea of "crypto dark matter", which refers to cryptographic structures and protocols that function outside the accepted frameworks of cryptographic analysis and application. Our aim is to understand, through a review of the literature, how lattice-based hard problems can be incorporated into the crypto dark matter framework, and to uncover new dimensions of security and functionality that challenge traditional approaches. This analysis emphasises the application of current developments in lattice-based cryptography to building secure cryptographic primitives while offering a thorough overview of the field. In the era of quantum computing, our studies highlight the importance of lattice-based hard problems as a frontier for innovative cryptography research as well as a solid foundation for strong cryptographic systems.
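The LWE problem mentioned above can be illustrated with a toy sketch: each sample hides the secret vector behind a small error term, and without that error the system would fall to Gaussian elimination. This is a didactic sketch only; the function name is illustrative, and uniform noise stands in for the discrete Gaussian used in real constructions.

```python
import random

def lwe_sample(s, q, noise_bound):
    """Produce one LWE sample (a, b) with b = <a, s> + e (mod q).

    Recovering s from many such (a, b) pairs is the Learning With
    Errors problem; the small error e is what makes it hard.
    """
    a = [random.randrange(q) for _ in range(len(s))]
    e = random.randint(-noise_bound, noise_bound)
    b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
    return a, b

secret = [3, 1, 4, 1, 5]
a, b = lwe_sample(secret, q=97, noise_bound=2)
```

Cryptosystems built on LWE publish many such samples as the public key, relying on the conjectured hardness of separating b from uniform.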
The aim of this study is to help researchers and practitioners better understand how advanced cryptographic applications interact with lattice theory, ultimately leading to the development of cryptographic solutions that are more effective and secure.

Item: Improving Large-Scale Simulation Efficiency of Shor's Quantum Factoring Algorithm (Indian Statistical Institute, Kolkata, 2024-07) - Pal, Pushpendra

Shor's factoring algorithm is one of the most eagerly awaited applications of quantum computing, promising to revolutionise fields such as cryptography by efficiently factoring large integers, a task that is computationally intensive for classical computers. Currently, the limited capabilities of today's quantum computers allow Shor's algorithm to be tested only on very small numbers, as they do not yet possess the qubit counts or error rates necessary to handle larger, more complex computations. In this study, we demonstrate how large GPU-based supercomputers can be leveraged to evaluate the performance of Shor's algorithm on numbers that are beyond the reach of current and near-term quantum hardware. By simulating the algorithm on powerful classical machines, we can gain insight into its behaviour and performance under various conditions. Initially, we examine Shor's original factoring algorithm and find that, despite the theoretical success probabilities being quite low, the average success rates are significantly higher due to frequent "lucky" cases: instances where factorization succeeds even when the sufficient conditions of the algorithm are not met. This phenomenon suggests that practical implementations of Shor's algorithm may benefit from a higher-than-expected probability of success. We then explore a robust post-processing method designed to increase the success probability of Shor's algorithm to nearly one with just a single execution.
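The classical post-processing at the heart of Shor's algorithm can be sketched as follows. This is the minimal textbook version, not the robust method studied in the dissertation: given the order r of a modulo N (the quantum subroutine's output), it tries to split N via gcd(a^(r/2) ± 1, N), and returns None in the "unlucky" cases where the sufficient conditions fail.

```python
from math import gcd

def factors_from_order(N, a, r):
    """Classical post-processing step of Shor's algorithm.

    Succeeds when r is even and a^(r/2) is not -1 mod N; then
    gcd(a^(r/2) +/- 1, N) yields a non-trivial factor of N.
    """
    if r % 2 != 0:
        return None
    x = pow(a, r // 2, N)
    if x == N - 1:  # a^(r/2) = -1 mod N gives only trivial factors
        return None
    p = gcd(x - 1, N)
    if 1 < p < N:
        return p, N // p
    p = gcd(x + 1, N)
    if 1 < p < N:
        return p, N // p
    return None

print(factors_from_order(15, 7, 4))  # (3, 5)
```

The "lucky" cases noted above are runs where this routine still returns a factor even though the textbook sufficient conditions are not strictly met.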
This method involves additional classical computational steps that refine the output of the quantum algorithm, making the overall factorization process much more reliable. Finally, we analyse the effectiveness of this post-processing method in the presence of typical quantum hardware errors, such as decoherence and gate errors. Our findings show that the quantum factoring algorithm displays a unique form of universality and resilience against various errors. This resilience suggests that Shor's algorithm, when combined with efficient post-processing techniques, can remain viable even in less-than-ideal quantum hardware environments. The largest semiprime we have factored using Shor's algorithm on a GPU-based supercomputer, without prior knowledge of the factors, represents a significant milestone: it poses a formidable challenge for the quantum computing community to exceed without resorting to oversimplifications on any quantum computing device. This achievement underscores the potential of classical-quantum hybrid approaches in advancing our understanding and implementation of quantum algorithms.

Item: Implementation and Performance Testing of Post-Quantum Algorithms in Hyperledger Fabric (Indian Statistical Institute, Kolkata, 2024-07) - Swarnakar, Monisha

The potential of distributed ledger technologies, such as blockchain, to create responsible and transparent linkages across a range of application domains has garnered a lot of interest. These methods build redundant networks via hash functions, digital signatures, and asymmetric cryptography. However, attacks employing quantum computers that take advantage of Grover's and Shor's algorithms can compromise the security of the current blockchain architecture.
