Dissertations - M Tech (CRS)
Permanent URI for this collection: http://164.52.219.250:4000/handle/10263/7285
Dissertation submitted in partial fulfilment of the requirements for the degree of Master of Technology in Cryptology and Security
Browse
Item Attacking ML inference via malicious MPC party (Indian Statistical Institute, Kolkata, 2024-07) Paul, Saswata
Secure Multi-Party Computation (MPC) in a three-party honest-majority setting is currently the most widely used cryptographic primitive for running machine learning algorithms in a privacy-preserving manner. Although MPC typically operates over integers, its functionality must be extended to support machine learning algorithms, which involve arithmetic on decimal numbers. To address this requirement, fixed-point arithmetic is used for running machine learning algorithms. Consequently, a secure truncation protocol is essential after every multiplication to preserve precision. Recently, a maliciously secure truncation protocol named MaSTer was proposed. This protocol, however, lets a malicious adversary add an error, with high probability, to each instantiation of multiplication without being detected. This project aims to design an attack exploiting this vulnerability in machine learning inference from the perspective of a malicious MPC party, with conclusions that depend on the fixed-point precision. The attack method we have chosen is adversarial examples. We give an attack strategy under a weaker assumption, discuss its results, and outline how this strategy can be generalized to a more general case.

Item Bulk Categorization Analyse (Indian Statistical Institute, Kolkata, 2021-07) Dhar, Arunava

Item A Catalogue for Provably Secure Symmetric Modes and Primitives (Indian Statistical Institute, Kolkata, 2021-07) Dutta, Anisha
Modern-day symmetric-key cryptology is enriched with numerous contributions, from symmetric-key primitives to modes of operation. The approach of designing and developing provably secure constructions has accelerated the growth of this subject.
A lot of encryption, authentication, and authenticated encryption modes are publicly available that are provably secure and give good results in terms of efficiency. But there are few resources that study all the modes and draw conclusions about their performance at the same time. This work is intended to study and compare all these modes of operation, both in terms of security (confidentiality and integrity) and efficiency (implementation area and throughput). We took care of the different security notions and design rationales of the compared schemes and generalised them as much as possible.

Item Cloud-assisted Multi-Channel Data Broadcasting (Indian Statistical Institute, Kolkata, 2025-07) Maji, Debabrata
The convergence of the Internet of Things (IoT) and cloud computing has transformed technology, impacting commerce, industrial production, data management, etc. Multi-Channel Broadcast Encryption (MCBE), first introduced by Phan et al. (ASIACCS 2013), is a cryptographic primitive used in both IoT and cloud settings that permits a sender to efficiently and securely encrypt several messages for different groups of receivers. After thoroughly exploring the existing literature, we observe that none of the existing schemes achieves robust provable security within the standard model. This paper addresses this gap, aiming to achieve adaptive INDistinguishability under full-IDentity Chosen-Ciphertext Attack (IND-ID-CCA) security by constructing an efficient identity-based MCBE without the Random Oracle Model (ROM). Our construction not only attains communication bandwidth of O(μ) size but also maintains constant-size overhead storage without any security vulnerabilities. Here, μ represents the number of messages. This is the first protocol proven to be adaptive IND-ID-CCA secure under the standard Decisional Bilinear Diffie-Hellman Type-3 (DBDH-3) assumption in public-key settings without any random oracles.
Moreover, practical implementation data reveals an optimal decryption algorithm, taking a mere 0.0048 seconds on IoT devices, demonstrating real-world applicability. Furthermore, our proposed design is highly efficient compared to other existing works, as shown by the implementation and graphical data.

Item Comparative Analysis on Different Feature Selection (Indian Statistical Institute, Kolkata, 2024-07) Goswami, Santanu
In this research, we propose a comprehensive framework for uncovering hidden patterns, selecting optimal features, and reducing dimensionality in large datasets, particularly focusing on 10K x 10K dimensional data. Traditional methods often struggle to handle such vast datasets efficiently due to computational constraints and information overload. To address this challenge, we introduce three innovative approaches leveraging deep neural networks (DNNs) and recurrent neural networks (RNNs) to enhance pattern identification, feature selection, and dimensionality reduction. Firstly, we develop a DNN-based framework tailored to identifying hidden patterns within extensive datasets. By harnessing the representational power of deep neural networks, our framework systematically uncovers intricate relationships and structures among observations, allowing unique patterns to be extracted and preserved for future use. Secondly, we propose an optimal feature selection framework designed to navigate efficiently through the entire feature set and identify the most informative subset. Leveraging advanced optimization techniques, our approach intelligently selects features that maximize predictive performance while minimizing redundancy, thus enhancing model interpretability and computational efficiency. Thirdly, we introduce an autoencoder-based dimension reduction method aimed at effectively reducing the dimensionality of the dataset without sacrificing crucial information.
By employing the encoding phase of an autoencoder architecture, we compress the input data into a lower-dimensional latent space, significantly reducing the number of features. Notably, our approach preserves the essential characteristics of the original data, ensuring minimal information loss. Lastly, we propose utilizing RNNs/LSTMs as an alternative to Markovian transition models, particularly addressing the limitations associated with the "memoryless" property. By harnessing the sequential nature of RNNs, our framework enables the generation of state transition probabilities with greater user control and flexibility, making it well suited for real-life applications where memory and context play crucial roles. Overall, our proposed framework offers a comprehensive solution for efficiently analyzing large-scale datasets, empowering researchers and practitioners to extract meaningful insights, make informed decisions, and advance various domains, including finance, healthcare, and engineering.

Item Design and Analysis of Blockchain-based E-voting Protocols (Indian Statistical Institute, Kolkata, 2021-07) Ghosh, Sarbajit
Voting is the most fundamental cornerstone of representative democracy. With the advent of internet technologies, various types of e-voting mechanisms have emerged. Building an e-voting system capable of performing as well as a traditional voting system is a very challenging task. On the positive side, blockchain technology inherently provides critical security properties for designing secure e-voting protocols. In this work, we have studied the decentralized boardroom-scale e-voting protocols designed by McCorry et al. in 2017. We have extended their work in multiple directions.
Our protocol supports (A) an arbitrary number of candidates; (B) majority-based counting; and (C) vote abstention. (D) We also provide a theoretical analysis of all the protocols.

Item Designing Algorithm for Lightweight Stream Cipher (Indian Statistical Institute, Kolkata, 2024-07) Samuel, Sunny
The role of embeddable cryptographic processors in revolutionizing defense communications for the Indian Navy bears immense significance. These processors serve as catalysts for a diverse range of novel applications critical to naval operations, encompassing tailored smartphones and robust tablet computers engineered for frontline tactical deployment. Moreover, they facilitate the establishment of secure tactical Wi-Fi networks, unmanned vehicle control systems, and real-time targeting capabilities, thereby bolstering operational efficacy and security. This technological evolution also expedites the adoption of modernized cryptography, promising to furnish secure wireless computing solutions even in the most arduous maritime environments. In alignment with the Indian Navy's modernization endeavors, a concerted attempt is underway to harness commercial off-the-shelf (COTS) cryptographic algorithms and processing hardware. This strategic approach not only fortifies resilience against technological obsolescence but also bolsters support for network-centric operations, streamlines data dissemination and sharing, and advances interoperability with allied naval forces. In response to these imperatives, the primary focus of the thesis is to develop a novel lightweight stream cipher algorithm expressly tailored for utilization within the Indian Navy, specifically for facilitating data communication among computers interconnected via an internal LAN.
The design draws inspiration from the FASTA and PASTA lightweight cryptography (LWC) algorithms and from the ASCON family, the awardee of the CAESAR competition.

Item Development of a Comprehensive Credit Risk Model to Predict Expected Loss in Individual Lending (Indian Statistical Institute, Kolkata, 2024-07) Khutia, Disha

Item Differential Privacy Enabled Deep Skin Image Classification Model Development (Indian Statistical Institute, Kolkata, 2025-07) Mandal, Prasun Kumar
In the era of big data, the explosive growth in data volume has significantly accelerated the development of deep learning. Deep learning is the most promising area of AI, yielding significant advancements in medical image classification. However, healthcare data contains sensitive information, so privacy and security are crucial to preventing unauthorized access. Note that there are data protection rules from multiple regulations that penalize any kind of data security violation, for example, the data protection principles (Article 5.1-2) and data protection by design and by default (Article 25) of the General Data Protection Regulation of the European Union. It is mandatory to follow such data regulations when developing and deploying deep models. Traditional deep learning models are vulnerable to several types of attacks, including membership inference attacks, where an adversary determines whether a specific data point was used in training; model extraction attacks, where attackers attempt to replicate the functionality of a trained model; and reconstruction attacks, which aim to recover original training data from model outputs. To mitigate data privacy leakage in deep learning models, this dissertation will focus on the development of a "Differential Privacy enabled deep model" that can deal with privacy leakage from the trained model. In this research, gradient clipping-based deep optimization algorithms (such as DP-SGD and DP-Adam) will primarily be experimented with.
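The gradient clipping-based optimization just mentioned can be illustrated with a minimal DP-SGD step: each example's gradient is clipped to a fixed L2 norm before Gaussian noise is added to the aggregate. The toy linear model, data, and hyper-parameters below are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np

# Minimal sketch of one DP-SGD step: per-example gradient clipping plus
# Gaussian noise on the summed gradient. Toy linear regression; all
# hyper-parameters are illustrative, not from the dissertation.
rng = np.random.default_rng(0)

def dp_sgd_step(w, X, y, clip=1.0, sigma=0.5, lr=0.1):
    # per-example gradients of squared loss: g_i = 2 * (x_i . w - y_i) * x_i
    residuals = X @ w - y
    grads = 2.0 * residuals[:, None] * X              # shape (n, d)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)     # clip each row to L2 <= clip
    noise = rng.normal(0.0, sigma * clip, size=w.shape)
    g = (grads.sum(axis=0) + noise) / len(X)          # noisy average gradient
    return w - lr * g

X = rng.normal(size=(32, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y)
```

The clipping bounds each example's influence on the update, which is what makes the added Gaussian noise yield a differential privacy guarantee.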
Automated classification of dermatological images will be the chosen application field for this research. The literature shows that several deep models exist which deal with dermatological image analysis and produce promising performance. However, the performance drop has not yet been explored adequately when such a model is trained with an optimization algorithm that preserves differential privacy. A number of deep neural networks are experimented with to assess performance degradation under the chosen secure training mechanism. Finally, this dissertation aims to develop a novel technique to build a differential privacy enabled skin image classification model. It utilizes publicly available dermatological image datasets such as ISIC 2018.

Item Dynamic Sparsification in Secure Gradient Aggregation for Federated Learning (Indian Statistical Institute, Kolkata, 2025-07-23) Samanta, Bikash
Secure aggregation is a critical component of privacy-preserving federated learning. However, existing fixed-sparsity approaches often incur unnecessary communication overhead. We present DynamicSecAgg, a novel framework that introduces dynamic sparsity while preserving coordinate-level privacy. Our method achieves significant improvements in communication efficiency while maintaining, and in some cases improving, model accuracy across both IID and non-IID user distributions. The framework maintains information-theoretic privacy guarantees via adaptive gradient thresholding and polynomial-based aggregation, proving particularly effective under heterogeneous data settings. These results establish dynamic sparsity as a key optimization for efficient and privacy-preserving federated learning.

Item Efficient and Secure Access Control for Sensitive Healthcare Data (Indian Statistical Institute, Kolkata, 2021-07) Samanta, Asmita
Healthcare services produce and use a great deal of sensitive personal data, and this healthcare data has a very high black-market value.
To make healthcare data easily accessible, we can consider an access control server; such a server for healthcare data has to be very secure. On the other hand, this data also needs to be easily accessible by the patient and authorized caregivers. In this thesis, we have studied an existing token-based access control solution that is applied to protect medical data in a hospital and observed its security limitations. We then modify that model, using Multi-Authority CP-ABE as a building block, to overcome these security limitations. We have proposed two modified models. Our first model relies on centralized MA-CP-ABE, computed on a composite-order bilinear group. Since it is a centralized model, there is a central authority; in our case the External IAM plays the role of the central authority, and we use the External IAM and the Policy Decision Point as our two attribute authorities. According to the security analysis, our first model is adaptively secure; this analysis is done in the standard model. Our second model relies on decentralized MA-CP-ABE, computed on a prime-order bilinear group. Since it is a decentralized scheme, there is no central authority; here also we use the External IAM and the Policy Decision Point as our two attribute authorities. According to the security analysis, our second model is CPA secure; this analysis is done in the random oracle model. Our second model is more efficient in terms of computation cost, whereas our first model is more efficient in terms of communication cost. We have implemented the decentralized Multi-Authority CP-ABE scheme, the building block of our second model, for use in the modified access control model.
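The linear secret sharing (LSSS) machinery at the heart of CP-ABE access policies can be illustrated by its simplest instance, Shamir threshold sharing over a prime field. This standalone sketch is illustrative only; it is not the modified LSSS or the Charm-crypto implementation discussed here, and the field modulus and parameters are assumptions.

```python
import random

# Illustrative Shamir (t, n) threshold sharing over a prime field, the
# simplest instance of the linear secret sharing (LSSS) underlying
# CP-ABE access policies. Modulus and parameters are assumptions.
P = 2**127 - 1  # a Mersenne prime used as the field modulus

def share(secret, t, n):
    """Split secret into n shares; any t of them reconstruct it."""
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```

In a full LSSS, the single threshold is replaced by a share-generating matrix derived from the access policy, but the reconstruct-by-linear-combination principle is the same.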
We implemented the code in Python using the Charm-crypto framework. Because of decryption outsourcing, our final decryption time is constant: it does not depend on the size of the data consumer's attribute set or on the number of attributes in the access policy. We have also implemented a modified LSSS, which is more efficient than Charm's LSSS, introduced a revocation property in the scheme, and provided insights on how to implement the whole access control model.

Item Efficient SIMD based Implementation of Xoodyak (Indian Statistical Institute, Kolkata, 2025-07-11) Biswas, Soham
Modern computing devices, particularly in the domains of the Internet of Things (IoT), mobile computing, and embedded systems, often operate under severe resource constraints in terms of processing power, memory (RAM/ROM), bandwidth, and battery life. Devices such as IoT sensors, smart cards, medical implants, RFID tags, and wearable systems typically rely on low-power hardware, including 8-bit microcontrollers with only a few kilobytes of memory. Conventional cryptographic algorithms are frequently unsuitable for such environments, as they may consume excessive power, introduce unacceptable latency, or fail to execute altogether. Lightweight cryptography addresses these challenges by providing cryptographic primitives specifically designed to operate efficiently on constrained hardware. With the rapid growth of IoT, billions of low-power devices are being deployed annually, all of which require fundamental security services such as encryption for data privacy, authentication for identity verification, and integrity protection to detect tampering. In response, international standardization bodies such as NIST and ISO have initiated efforts to define lightweight cryptographic standards.
Notably, NIST’s Lightweight Cryptography Project aims to standardize algorithms that offer an effective balance between security and performance in resource-limited environments. Xoodyak is a modern lightweight cryptographic scheme developed for constrained platforms including IoT devices, embedded systems, and other resource-limited applications. It supports authenticated encryption, hashing, and pseudo-random number generation within a compact and efficient design, making it well suited for environments with strict limitations on memory, power, and computational capacity. Xoodyak was designed by Guido Bertoni, Joan Daemen, Michael Peeters, and Gilles Van Assche, who are also among the creators of Keccak (SHA-3). The scheme is built around the Xoodoo permutation, from which it derives its name, and was submitted to NIST’s Lightweight Cryptography Project, where it was recognized for its strong security properties and efficient performance across diverse platforms. Although Xoodyak is highly efficient on 8-bit, 16-bit, and 32-bit microcontrollers due to its compact code size and reliance on a single permutation for multiple cryptographic services, its design also enables a high degree of parallelism. This characteristic makes it suitable for deployment on powerful server-class processors that manage large numbers of constrained devices. In this work, we explore SIMD-based implementations of Xoodyak on modern Intel processors supporting AVX2 and AVX-512 instruction sets. 
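The SIMD paradigm in question can be illustrated without the Xoodoo permutation itself: store word i of every instance in one vector so that a single vectorized operation advances all instances at once. The toy round below, with NumPy arrays standing in for AVX lanes and an assumed 4-word state and rotation amount, is a sketch of the idea, not Xoodyak.

```python
import numpy as np

# Not Xoodoo itself: a toy illustration of the SIMD paradigm, where the
# same round function is applied to many independent cipher states at
# once by keeping corresponding 32-bit words of all instances in one
# vector (a NumPy array standing in for AVX2/AVX-512 lanes).
N = 512                        # parallel instances, matching the 512-way goal
MASK = np.uint64(0xFFFFFFFF)   # keep words to 32 bits
R = np.uint64(16)              # toy rotation amount

def toy_round(state):
    """One toy ARX-style round, lane-wise over all instances.
    state has shape (4, n): word i of every instance sits in row i."""
    a, b, c, d = state
    a = (a + b) & MASK                       # 32-bit add across all lanes
    d = d ^ a                                # xor across all lanes
    d = ((d << R) | (d >> R)) & MASK         # 32-bit rotate-left by 16
    return np.stack([a, b, c, d])

rng = np.random.default_rng(1)
states = rng.integers(0, 2**32, size=(4, N), dtype=np.uint64)
out = toy_round(states)

# lane k of the batched result equals running that instance on its own
k = 7
assert np.array_equal(out[:, k], toy_round(states[:, k:k + 1])[:, 0])
```

Real AVX-512 code expresses the same idea with 512-bit registers holding 16 lanes each, with further parallelism obtained by interleaving multiple such registers.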
While the eXtended Keccak Code Package (XKCP) provides up to 16-way parallelization, we investigate alternative SIMD parallelization paradigms capable of executing up to 512 parallel instances simultaneously.

Item Enhancing Text to SQL Generation with Dynamic Vector Search (Indian Statistical Institute, Kolkata, 2024-07) Mondal, Soumyadweep
Generating accurate SQL from natural language questions (text-to-SQL) is a longstanding challenge due to the complexities involved in understanding user queries, comprehending database schemas, and generating SQL statements. Traditional text-to-SQL systems have utilized human-engineered solutions and deep neural networks. More recently, pre-trained language models (PLMs) have been employed for text-to-SQL tasks, showing promising results. However, as modern databases and user queries become increasingly complex, the limited comprehension capabilities of PLMs can lead to incorrect SQL generation. This necessitates sophisticated and tailored optimization methods, which restrict the applicability of PLM-based systems. In contrast, large language models (LLMs) have demonstrated significant advancements in natural language understanding as their scale increases. This thesis explores the integration of LLMs into text-to-SQL systems, highlighting unique opportunities, challenges, and solutions. We propose a novel approach that leverages examples similar to user queries, allowing the model to better understand and generate accurate SQL. This work provides a comprehensive review of LLM-based text-to-SQL systems, outlining current challenges and the evolutionary process of the field. We introduce datasets and metrics designed for evaluating text-to-SQL systems.
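The example-retrieval idea, picking stored question/SQL pairs similar to the user's query, can be sketched with toy bag-of-words vectors and cosine similarity; a production system would use learned embeddings and a vector index, and the example pairs below are invented for illustration.

```python
import math
from collections import Counter

# Hypothetical sketch of retrieving the stored question/SQL pair most
# similar to a user query. Bag-of-words vectors and the example pairs
# are illustrative; a real system would use learned embeddings.
examples = [
    ("how many employees are in each department",
     "SELECT dept, COUNT(*) FROM employees GROUP BY dept;"),
    ("list customers who placed an order in 2023",
     "SELECT DISTINCT c.name FROM customers c "
     "JOIN orders o ON o.customer_id = c.id WHERE o.order_year = 2023;"),
]

def bow(text):
    """Bag-of-words vector as a token -> count mapping."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query):
    """Return the stored (question, sql) pair closest to the query."""
    return max(examples, key=lambda ex: cosine(bow(query), bow(ex[0])))

q, sql = most_similar("count employees per department")
assert sql.startswith("SELECT dept")
```

The retrieved pair would then be placed in the prompt so the LLM can imitate a known-good question-to-SQL mapping.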
Finally, we discuss remaining challenges and propose future directions for research in this domain.

Item Evaluation of Optical Character Recognition (OCR) accuracy: Supervised and Unsupervised techniques (Indian Statistical Institute, Kolkata, 2021-07) Banerjee, Niladri
This work's aim is to find an efficient method to measure Optical Character Recognition (OCR) accuracy in the absence of the ground-truth text. Initially, we tried some efficient supervised accuracy-measuring techniques (in the presence of the ground-truth text). Then we tried some unsupervised techniques (in the absence of the ground-truth text), which is the final goal of our project, and compared their performance with that of the previously obtained supervised techniques. Our final project goal is to provide an efficient unsupervised accuracy-measuring technique which can help us automate the document analysis process.

Item Exploring the Underlying Assumptions of Lattice Constructions : A Theoretical Investigation (Indian Statistical Institute, Kolkata, 2024-07) Datta, Arkaprava
Owing to its adaptability in cryptographic protocols and its possible defence against quantum attacks, lattice-based cryptography has become a very attractive topic. This survey explores the fundamental hard problems in lattice theory, such as the Shortest Vector Problem (SVP), the Closest Vector Problem (CVP), and the Learning With Errors (LWE) problem, which form the cornerstone of lattice-based cryptosystems. We explore the intricate mathematical structures and specifics of all of these problems, highlighting their computational difficulty and importance. In addition, we look at the idea of “crypto dark matter,” which refers to cryptographic structures and protocols that function outside of the accepted frameworks for cryptographic analysis and application.
Our aim is to gain knowledge regarding the incorporation of lattice-based hard problems into the crypto dark matter framework through a review of the literature, and to uncover new dimensions of security and functionality that challenge traditional approaches. This analysis emphasises the application of current developments in lattice-based cryptography to building secure cryptographic primitives while offering a thorough overview of the field. In the era of quantum computing, our studies highlight the importance of lattice-based hard problems as a frontier for innovative cryptography research as well as a solid foundation for strong cryptographic systems. The aim of this study is to help researchers and practitioners better understand how advanced cryptographic applications interact with lattice theory, which will ultimately lead to the development of cryptographic solutions that are more effective and secure.

Item Fault Analysis of the Prince Family of Lightweight Ciphers (Indian Statistical Institute, Kolkata, 2021-07) Kundu, Anup Kumar
Privacy is one of the most fundamental aspects of the digital age that we live in. With the advent of the Internet and the advances in both nanoscale electronics and communication technologies, data has become the new oil, and wherever there is data there is a notion of its privacy. Whether data is at rest or in motion, privacy and authenticity have always been the hallmark of modern-day communication. Cryptography provides us the necessary tools and primitives that help us achieve, among others, the goals of privacy, integrity, and authenticity, in isolation and more recently even simultaneously. While conventional crypto tackles most of these problems efficiently, it has been seen to be particularly unsuitable for the resource-constrained environments that are increasingly prevalent in present-day Internet-of-Things (IoT) deployments, RFID tags, and so on.
This is primarily attributed to the fact that traditional crypto is “heavy-weight” in terms of the computational resources it demands, be it chip area, power consumption, or throughput, and hence becomes unusable or overwhelming for devices that operate on limited resources. This points us in the direction of a new type of crypto referred to as “lightweight” crypto. Lightweight cryptographic algorithms are tailored for resource-starved settings and hence perform better in such environments. The importance of lightweight crypto is evidenced by the ongoing multi-year global competition by NIST for standardizing the next generation of lightweight authenticated ciphers, presently in its final round. This work consists of the cryptanalysis of two lightweight block ciphers, PRINCE and PRINCEv2, which are based on the SPN design philosophy. PRINCE has been around for some time and was proposed with unrolled implementations in mind. PRINCEv2 is the new version of PRINCE, reported in SAC 2020. In the current work, we introduce a new fault attack on PRINCE based on the random-bit model, where faults are injected in the input of the 10th round. The attack is able to uniquely recover the key using 7 faults. It is interesting to see that the random bit-fault model, though a popular fault model, has not yet been explored independently on PRINCE. Though Song and Fu [SH13] have explored the random-nibble fault and mentioned the bit model to be a special case, they actually fail to capture the full scenario. Herein lies the motivation of the current work. We look at the bit model in isolation and in depth, and conclude that it is more effective both in terms of the point at which the fault is injected and the complexity of the resulting DFA.
In terms of the point of fault injection, it is important to emphasize that in the attack reported in [SH13], the fault is actually injected before/during the SubByte-Inverse operation in the 10th round, which is the last operation of the 10th round. Thus it would be more appropriate to state the fault injection point as 10.5 rounds at best, instead of the 10 rounds claimed by the authors in [SH13] (refer Fig. 3.7). We touch upon this aspect in detail in the discussion section later in this work. On the contrary, the random bit-flip DFA proposed here actually induces the fault at the input of the 10th round. The work further gives a classification of the fault invariants generated at the end of the 11th round due to a random bit-fault at the beginning of the 10th round. Further, PRINCEv2 was introduced with many modifications, primarily in the key schedule, to thwart many classical attacks on PRINCE. We investigated PRINCEv2 in the light of the current work and found that it is equally vulnerable to all attacks reported here. Finally, we look at PRINCE-like ciphers in general and comment on the impact of the α-reflection property on the amplification of the scope of fault injection.

Item Federated Learning Using Fully Homomorphic Encryption (Indian Statistical Institute, Kolkata, 2025-07) Kuddus, Sk Golam
Traditional machine learning approaches require centralizing data for training, which raises significant privacy concerns when dealing with sensitive information. Federated learning (FL) addresses this by keeping data local and enabling multiple users to collaboratively train a shared machine learning model. In spite of this, FL remains vulnerable to inference attacks, as sensitive information can still be extracted from the model's learned parameters. While traditional privacy-enhancing techniques such as differential privacy introduce noise to model updates to obscure individual data points, they often present a fundamental trade-off between privacy and utility.
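Secret sharing, one of the building blocks such frameworks combine with homomorphic encryption, can be sketched in its simplest additive form: each client splits its quantized update into random shares so that no individual share reveals anything, yet summing all shares reconstructs the sum of updates. The modulus and share layout below are illustrative assumptions, not the dissertation's construction.

```python
import random

# Minimal additive secret sharing for gradient aggregation (illustrative
# only; the dissertation combines this idea with homomorphic encryption,
# which is omitted here). Updates are integers modulo Q, e.g. quantized
# gradient values.
Q = 2**32

def make_shares(update, n_servers):
    """Split one update into n_servers additive shares modulo Q."""
    shares = [random.randrange(Q) for _ in range(n_servers - 1)]
    shares.append((update - sum(shares)) % Q)
    return shares  # any n_servers - 1 shares alone look uniformly random

def aggregate(all_shares):
    """Each server sums the shares it holds; summing the per-server
    totals reconstructs the sum of updates, never any single update."""
    per_server = [sum(col) % Q for col in zip(*all_shares)]
    return sum(per_server) % Q

updates = [5, 17, 42]
all_shares = [make_shares(u, n_servers=3) for u in updates]
assert aggregate(all_shares) == sum(updates) % Q
```

The server-side summation here is exactly the kind of linear operation that homomorphic encryption can also perform directly on ciphertexts.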
Furthermore, such noise-based approaches still carry risks of data leakage if the implementation is flawed or adversaries possess sophisticated attack capabilities. To address these limitations, we propose a novel federated learning framework that integrates homomorphic encryption and secret sharing to provide robust privacy guarantees. Our approach ensures that both raw data and model updates remain confidential throughout the learning process. By enabling computations on encrypted data, our framework allows the aggregation server to perform model updates without ever accessing plaintext information. We evaluate our framework on the CIFAR-10 and MNIST handwritten digit classification datasets, demonstrating that it achieves accuracy comparable to traditional FL while providing substantially stronger privacy protections. Performance analysis shows that our approach introduces acceptable computational overhead, making it practical for real-world applications. The framework is especially valuable in sensitive domains such as healthcare, defence, finance, and personal monitoring systems, where data confidentiality is paramount. Our contribution advances the state of the art in privacy-preserving machine learning by offering a comprehensive solution that maintains utility while providing cryptographic privacy guarantees that protect against both honest-but-curious aggregators and potential adversaries.

Item From zero to HEro: zkSNARKs proof construction with HE (Indian Statistical Institute, Kolkata, 2024-07) Pal, Pritam
In recent times, the development of zkSNARK protocols has opened up many applications for proving the authenticity of data, computations, and also the sender, without revealing the secret data and with very little communication and verification cost. However, resource-constrained devices such as security cameras, mobile phones, and sensors do not have enough memory and computation power to generate the proof.
Outsourcing zkSNARK proof construction, however, leads to privacy concerns, as cloud providers may learn secret information. Differing from collaborative proof generation over distributed servers [28, 23], we discuss an approach using fully homomorphic encryption to delegate the proof construction securely to a cloud server. To generate the proof of a circuit, we need to commit to the polynomials which represent the constraints of the circuit; if the circuit contains n constraints, we apply the commitment scheme O(n) times. We have therefore focused on the KZG polynomial commitment scheme, which is common to most zkSNARK protocols. The approach of delegating the commitment generation to the cloud server requires precomputation of elliptic-curve points, which results in high client memory usage. We present the idea of using PIR protocols, such as Vectorized BatchPIR and SimplePIR, to retrieve the precomputed points from the cloud server, reducing the user's memory usage. We note some difficulties we faced with the implementation and future possibilities for improvement.

Item Image Search (Indian Statistical Institute, Kolkata, 2024-07) Mondal, Subrata
With the rapid increase in digital images, it has become essential to have advanced systems to find specific images quickly from large collections. Traditional methods that depend on text descriptions often fail because tagging images manually is time-consuming and subjective. This project uses deep learning to create an efficient image search system for a dataset of approximately 5,000 printing images. A transfer learning technique has been implemented in this work. Transfer learning is an ambitious task, but it yields impressive outcomes for identifying distinct patterns in small datasets such as our collection of approximately 5,000 printing images from our website 'ARC Print'.
The goal is to produce feature vectors that capture the important details of each image, allowing us to search based on content rather than text. We tested the system for accuracy and speed, showing that it works well and is efficient. Feedback from management also confirms that the system is practical and useful. The results indicate that our method is much better than traditional ones, providing quick and accurate search results based on image content. This project demonstrates the power of deep learning in image search, and it can be used in many areas, especially in online shopping. The proposed model achieved 89% accuracy and, based on our findings, the proposed system can substantially enhance the user experience on our website. In the future, we aim to improve the system further and explore more applications, highlighting the importance of advanced machine learning in handling large collections of images.
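The content-based search loop described above can be sketched as nearest-neighbour retrieval over L2-normalised feature vectors. Here random vectors stand in for the CNN embeddings a real system would extract with a pretrained backbone; the database size and dimensionality are illustrative assumptions.

```python
import numpy as np

# Sketch of content-based retrieval over stored feature vectors. Random
# vectors stand in for CNN embeddings (illustrative only); a real system
# would extract them with a pretrained backbone.
rng = np.random.default_rng(0)
db = rng.normal(size=(5000, 512)).astype(np.float32)   # ~5000 images, 512-d
db /= np.linalg.norm(db, axis=1, keepdims=True)        # L2-normalise once

def top_k(query_vec, k=5):
    """Return indices and cosine similarities of the k closest images."""
    q = query_vec / np.linalg.norm(query_vec)
    sims = db @ q                     # cosine similarity against every image
    idx = np.argsort(-sims)[:k]       # indices of the k best matches
    return idx, sims[idx]

# a query near image 42 should retrieve image 42 first
query = db[42] + 0.01 * rng.normal(size=512)
idx, sims = top_k(query)
assert idx[0] == 42
```

Normalising the database once up front reduces each search to a single matrix-vector product, which keeps retrieval fast even for tens of thousands of images.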
