
Tech Law Forum @ NALSAR

A student-run group at NALSAR University of Law


Category: Right to Privacy

IT AMENDMENT RULES 2022: An Analysis of What’s Changed

Posted on November 25, 2022 (updated April 30, 2025) by Tech Law Forum NALSAR

[This post is authored by Sohina Pawah, a second-year student at the NALSAR University of Law, who is also an Editor for the TLF]

INTRODUCTION

Back in June 2022, the Ministry of Electronics and Information Technology (“MeitY”) first released the proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules 2021”) for public consultation. Recently, MeitY notified the amendments to Parts I and II of the IT Rules 2021 by introducing the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022 (“IT Amendment Rules 2022”). The IT Amendment Rules 2022 aim to regulate social media intermediaries by increasing their compliance burden and ensuring that the safe harbours provided to them are not abused. On the whole, the Rules aim to strengthen the protective framework for “netizens’ interests” by prioritising their fundamental rights under Articles 14, 19, and 21 of the Indian Constitution.


Facial Recognition and Data Protection: A Comparative Analysis of laws in India and the EU (Part I)

Posted on April 3, 2021 (updated December 27, 2024) by Tech Law Forum NALSAR

[This two-part post has been authored by Riddhi Bang and Prerna Sengupta, second year students at NALSAR University of Law, Hyderabad. Part II can be found here]

With the wave of machine learning and technological development, a new system that has arrived is Facial Recognition Technology (FRT). From invention to accessibility, this technology has grown rapidly in the past few years. Facial recognition falls under the aegis of biometric data, which includes the distinctive physical characteristics or personal traits of a person that can be used to verify that individual. FRT primarily works through pattern recognition: it detects and extracts patterns from data and matches them against patterns stored in a database by creating a biometric ‘template’. This technology is being increasingly deployed, especially by law enforcement agencies, and thus raises major privacy concerns. It also attracts controversy due to potential data leaks and various inaccuracies. In fact, in 2020, a UK Court of Appeal ruled that facial recognition technology employed by law enforcement agencies, such as the police, violated human rights because there was “too broad a discretion” given to police officers in implementing the technology. It is argued that despite the multifarious purposes that this technology purports to serve, its use must be regulated.
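The template-matching step described above can be illustrated with a minimal sketch (a toy example, assuming feature vectors have already been extracted from face images; the vectors, names and threshold are hypothetical, not any real FRT vendor's algorithm):

```python
import math

def cosine_similarity(a, b):
    # How closely two biometric "templates" (feature vectors) align.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match(probe, gallery, threshold=0.9):
    # Return the IDs whose stored template is close enough to the probe.
    return [pid for pid, tmpl in gallery.items()
            if cosine_similarity(probe, tmpl) >= threshold]
```

A match is declared whenever similarity clears the threshold, which is precisely where the inaccuracy concerns arise: lowering the threshold increases false matches, raising it increases missed ones.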


Artificial Intelligence is a Road Map to Transmogrification of Legal Industry

Posted on September 30, 2019 by Tech Law Forum NALSAR

This piece, taking an optimistic view of the use of AI in the legal industry, has been authored by Priyal Agrawal and Laxmi Rathore. They are currently in their 3rd year at the Kirit P. Mehta School of Law, NMIMS, Mumbai.

“In the long term, artificial intelligence and automation are going to be taking over so much of what gives humans a feeling of purpose.” – Matt Bellamy

Artificial intelligence is a computer-based system that performs tasks which typically require human intelligence. In this process, computers use rules to analyze data, study patterns and gather insights from the data. Artificial Intelligence companies persistently evolve technology that can manage arduous tasks in various sectors with enhanced speed and accuracy. Artificial Intelligence has transformed nearly every professional sector, including the legal sector. It is finding its way into the legal profession, and a plethora of software solutions is now available that can take over the humdrum and tedious work done by lawyers. In the legal profession, the changes are diverse: software solutions have displaced paperwork, documentation and data management.

This blog analyzes the use of AI in the legal industry. It describes various AI tools which are used in the legal sector, and gives an insight into the use of AI in the Indian Judiciary system to reduce pendency of cases. Finally, we discuss the challenges in the implementation of AI in the legal field.

In the legal field, Artificial Intelligence can serve as a digital counsel in the areas of due diligence, prediction technology, legal analytics, document automation, intellectual property and electronic billing. One such tool is Ross Intelligence. This software has natural language search capabilities that enable lawyers to ask questions and receive information such as related case laws, recommended readings and secondary sources. Prediction technology is software that estimates a litigation’s probable outcome. In 2004, a group of professors from Washington University examined their algorithm’s accuracy in predicting Supreme Court judgments in 628 cases from 2002. The algorithm’s results were compared to the findings of a team of experts: it proved the more accurate predictor, correctly predicting 75 percent of the outcomes against the experts’ 59 percent. In 2016, JP Morgan developed an in-house legal technology tool named COIN (Contract Intelligence). It extracts 150 attributes from 12,000 commercial credit agreements and contracts within seconds. According to the organization, this is equivalent to 36,000 hours of legal work by its lawyers.
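The kind of outcome-prediction software described above can be sketched in miniature (the features, rules and cases here are entirely invented for illustration and are not the Washington University model):

```python
def predict_outcome(case):
    # Hypothetical hand-crafted rules over a few case features,
    # in the spirit of early Supreme Court outcome-prediction models.
    if case["lower_court"] == "liberal" and case["petitioner"] == "government":
        return "reverse"
    return "affirm"

def accuracy(cases):
    # Fraction of cases whose known outcome the rules predict correctly.
    hits = sum(predict_outcome(c) == c["outcome"] for c in cases)
    return hits / len(cases)
```

Comparing such an accuracy figure over held-out cases against human experts is exactly the experiment the 2004 study ran, only with far richer features.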

In an interview, UK law firm Slaughter and May reviewed Luminance, an AI tool it currently uses. The tool is designed to assist with contract reviews, especially due diligence exercises during mergers and acquisitions. It was found that the tool freed the firm’s lawyers to spend more time on higher-value work, and that it fits well into the firm’s existing workflows for M&A due diligence. The documents that the tool helps to review are already stored in a virtual data room; the only additional step is to introduce those documents into the solution itself.

India is also adopting artificial intelligence in the legal field. One of India’s leading law firms, Cyril Amarchand Mangaldas, is incorporating artificial intelligence into its contract analysis and review processes, in partnership with the Canadian AI assistant Kira Systems. The software will analyze contracts and flag risky provisions, improving the effectiveness and accuracy, and scaling up the speed, of the firm’s delivery model for legal services and research.

In the Indian judicial system, where a plethora of cases is pending, artificial intelligence can play a significant role in reducing the burden. Almost 7.3 lakh cases are left pending per year. Advocates must undertake extensive legal research to argue their cases, and the use of AI can accelerate that research and enhance the judicial process. In this regard, a young advocate named Karan Kalia developed a comprehensive software program for the speedy disposal of trial court cases, presented to the Supreme Court’s E-Committee led by Justice Madan B Lokur. The software instantly offers a trial judge relevant case laws while also identifying their reliability.

AI enables lawyers to gain nonpareil insight into the legal realm and complete legal research within seconds. AI can balance the expenditure required for legal research by bringing uniformity to the quality of research. AI tools help review only those documents relevant to the case, rather than requiring humans to review every document. AI can analyze data to make quality predictions about the outcome of legal proceedings competently, and in certain cases better than humans. Lawyers and law firms can turn their attention to clients rather than spending time on legal research, making optimum use of constrained human resources. They can present arguments and evidence digitally, have them processed, and submit them faster.

Although AI faces some challenges, these can be overcome with time. The major concern surrounding AI is data protection. AI is currently used without any legal framework, which creates risks around information assurance and security. A stringent framework is needed to regulate AI, safeguard individuals’ private data and provide safety standards. A few technical barriers will also limit the implementation of AI technologies: it is difficult to construct algorithms that capture the law in a useful way, and the lack of digitalization of data is a further constraint. The complexity of legal reasoning acts as a potential barrier to implementing effective legal technologies. However, these issues will eventually be rectified with continuous usage and time.

The introduction of AI in the legal sector will not substitute for lawyers. In reality, technology will increase the efficiency and productivity of lawyers, not replace them. The roles of lawyers will shift, rather than decline, and become more interactive with technological applications in their field. None of these AI tools aims to replace a lawyer; rather, they increase the authenticity and accuracy of research and enable lawyers to give more result-oriented suggestions to clients. As McAfee and Brynjolfsson have pointed out, “Even in those areas where digital machines have far outstripped humans, people still have vital roles to play.”

The use of AI will manifest as a new broom that sweeps clean, i.e., it will bring about far-reaching changes in the legal field. Over the next decade, the use of AI-based software is likely to increase manifold. This will lead to advancement and development in the functionality of present lawyering technologies such as decision engines, collaboration and communication tools, document automation, e-discovery and research tools, and legal expert systems. Trending industry concepts like big data and unstructured databases will allow vendors to provide more robust performance. There will also be an influx of non-lawyer service providers entering the legal industry, some wholly consumer-based, some lawyer-focused, and others selling their wares to both consumers and lawyers. The future for manual labour in law looks bleak, for the legal world is gearing up to function in tandem with AI.


Data Protection: Consumer Perspectives at Facebook Design Jam

Posted on July 18, 2019 by Tech Law Forum NALSAR

[Ed Note: This post is the first in a series of posts by members of TLF who attended the Facebook Design Jam in Hyderabad on 10 July 2019. It has been authored by Namratha Murugeshan, a final year student at NALSAR University of Law and member of TLF.]

Members of TLF’s Organizing Committee were invited to attend Facebook’s Data Awareness Design Jam on 10 July 2019. A Design Jam is an event that provides a platform for start-ups and designers to pitch and improve their products; these are typically very interactive and informative sessions that help participants gain new perspectives on their products and learn more about compliance with law and policy. Facebook’s event was an excellent opportunity for us to interact with start-ups, professionals, policymakers, designers and, surprisingly, quite a few lawyers too. A key takeaway for the TLF members present was the knowledge gained about consumer perspectives surrounding data protection in India. A panel discussion on the same topic was organized at the Jam. The speakers included Smriti Parsheera from NIPFP (National Institute of Public Finance and Policy), Shagufta Gupta from CUTS (Consumer Unity & Trust Society) and Prerak Mehta from Dalberg Global Development Advisors. This post is a brief on the panel discussion.

The focus of the panel discussion was on value creation for companies through greater compliance with the transparency norm. The speeches, while ranging in perspective, centered on how compliance with law and increased transparency aid the reputation of start-ups and companies. This is a particularly interesting insight, given that data protection in the eyes of the law has largely come to be viewed as the foundation upon which creative technologies need to be built. From the perspective of the creator, however, compliance with the law seems to be more of a last-minute adjustment: a product is created based on the needs of possible users, and law and policy are learnt afterwards to tweak the product.

The panel largely focused on how consumer products such as apps and databases could be made better by creators improving the privacy aspects of their user interfaces. One suggested way of doing so was the removal of blanket consent clauses. It was explained that blanket consent is a tool used by apps to access even that data of a user which is not necessary for the functioning of the app, thereby taking away the consumer’s agency. Adding to this, it was suggested that the idea of purpose limitation, where specific permissions are taken based on the use of the information, should be adopted. Further, there should be a clear statement of why data is being collected from the user. Speaking on the surveys conducted by CUTS on data protection, the audience was informed of the direct correlation between awareness and consumer satisfaction. Transparency increases the customer’s ease of use of a product, and it is therefore productive for creators to adhere to it.

Moving on, the next issue in contention was the readability of privacy policies. Length, language and the excessive use of legalese were identified as the factors that prevent users from understanding, or even reading, privacy policies. These policies, it was noted, tend to be inscrutable. Further, from the point of view of the user, there is a lack of knowledge about the enforceability of privacy policies: based on data collected by NIPFP through surveys, users do not know who enforces privacy policies, how they are enforced, or whether they are enforced at all. As suggestions to the start-ups present, the panelists focused on readability. One approach is to ensure that policies are short and simple. Further, interactive features that help different users find answers to their queries about the policy easily add value to the product. Features like videos explaining privacy policies, larger font sizes, etc. would help incorporate privacy by design, which in turn builds trust.
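The length-and-legalese point lends itself to simple measurement. A minimal sketch of two rough readability proxies (an illustration, not any panelist's actual methodology) might look like this:

```python
import re

def readability_stats(text):
    # Rough proxies: long sentences and long words both hurt readability.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "words_per_sentence": round(len(words) / len(sentences), 1),
        "chars_per_word": round(sum(len(w) for w in words) / len(words), 1),
    }
```

A policy averaging, say, 40 words per sentence would score far worse on the first metric than the short-and-simple drafting the panelists recommended.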

The main takeaway from the panel discussion was its elaboration on how product-makers tend to think of privacy and which aspects they focus on. It was a significant change from the perspective of the law, where compliance is the rule. For product-makers, we learnt, a product’s usability and success are ultimately the motivation pushing them towards innovation. The panel discussion was an excellent platform where these two seemingly divergent views were synthesized to promote the idea of privacy by design.


The Dark Web: To Regulate or Not to Regulate, That Is the Question

Posted on December 29, 2018 by Shweta Rao

[Ed Note: In an interesting read, Shweta Rao of NALSAR University of Law brings us up to speed on the debate regarding regulation of the mysterious “dark web” and provides us with a possible way to proceed as far as this hidden part of the web is concerned.]

Human Traffickers, Whistleblowers, Pedophiles, Journalists and Lonely-Hearts Chat-room participants all find a home on the Dark Web, the underbelly of the World Wide Web that is inaccessible to the ordinary netizen.  The Dark Web is a small fraction of the Deep Web, a term it is often confused with, but the distinction between the two is important.

The Dark Web, unlike the Deep Web, is only accessible through anonymizing software, as distinguished from the non-anonymous tools used to reach the Surface Web, such as Google, Bing etc. One such tool is The Onion Router (Tor), one of the most popular means of accessing the dark web, which derives its name from the similarity of the platform’s multilayered encryption to the layers of an onion. Dark Web sites also require users to enter a unique Tor address, often with an additional security layer of a password. These access restrictions are what distinguish the Dark Web from the Deep Web, which may be accessed through Surface Web applications. Further, the Deep Web may, due to its discreet nature, seem to occupy only a fraction of the World Wide Web, when in actuality it is estimated to be 4,000-5,000 times larger than the Surface Web and to host around 90% of the internet’s web traffic. The Dark Web, in contrast to these figures, occupies a minuscule amount of space, with fewer than 45,000 Dark Web sites recorded in 2015. Thus, the difference between the Deep and Dark Web lies not in their respective content, but in the requirements and means of access to these two spaces, along with the quantity of web traffic they attract.
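The onion metaphor can be made concrete with a toy sketch of layered encryption (the XOR "cipher" here is only a stand-in for the real public-key cryptography Tor uses; the point is simply that each relay can strip exactly one layer and sees nothing beneath it):

```python
import base64

def wrap(message: bytes, relay_keys):
    # Apply one encryption layer per relay, innermost layer first,
    # so the first relay on the route peels the outermost layer.
    data = message
    for key in reversed(relay_keys):
        data = base64.b64encode(bytes(b ^ key for b in data))
    return data

def unwrap(data: bytes, relay_keys):
    # Each relay, in route order, removes exactly one layer with its own key.
    for key in relay_keys:
        data = bytes(b ^ key for b in base64.b64decode(data))
    return data
```

Only after the last relay strips its layer does the original message emerge, which is why no single relay can link sender to content.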

The Dark Web has existed nearly as long as the Internet, having begun as a parallel project to the US Department of Defense’s (USDD’s) 1960s ARPANET project. The USDD allowed the Dark Web to be accessible to the public via Tor so that it could mask its own communications. Essentially, if the Dark Web were used only for USDD communications there would be no anonymity, as anyone who made their way into the system would know that all communications were those of the USDD. By allowing the public to access it via Tor, the USDD could hide its communications in the stampede of general traffic passing through the network.

While the Internet became a household name by the late 1990s, the Dark Web remained obscure until 2013, when it gained infamy with the arrest of Ross William Ulbricht (aka the Dread Pirate Roberts), the operator of Silk Road, a marketplace for illegal goods and services.

While fully regulating a structure such as the Dark Web is a near impossible feat, this arrest has indeed pushed the previously obscure Dark Web into the spotlight, putting prosecutors and law enforcement agencies across the world on the alert. This new-found attention into the workings of the Dark Web is the junction at which the debate for regulation policies emerges.

The debate on the status of surveillance of the Dark Web broadly has two branches. The first branch, which has emerged with more force after the exposure of Silk Road, advocates more frequent and stricter probes into the activities of the Dark Web. In contrast, the second branch weighs increased regulation against issues of breach of privacy, which is one of the main reasons behind the use of tools such as Tor.

In order to understand the reasoning behind either branch’s stance, it is essential to look at the breakup of the Dark Web and its various uses, each finding its place at different points along the spectrum having legality and illegality as its extremes.

The Dark Web, as mentioned previously, occupies a fraction of the space on the Deep Web, with fewer than 50,000 websites currently functioning, and the number of fully active sites is even lower. Legal activities take up about 55.9% of the total space on the Dark Web, whilst the rest contains illegal activities such as counterfeiting, child pornography, illegal arms dealing and drug peddling, amongst others. Activities such as whistleblowing and hacking, given their context-dependent character, cannot be placed in one category or the other and fall into a “grey area” of sorts.

With nearly half of the activity on the Dark Web being illegal, the call for increased regulation seems reasonable. However, regular residents of this fraction of the internet often differ, and their objection hinges, as mentioned earlier, on the issue of privacy.

Privacy has become a buzzword across the globe in the recent past, with various nations having to reevaluate their citizens’ information rights amidst the boom of the data wars. From the General Data Protection Regulation (GDPR) in the EU to the Puttaswamy case in India, the Right to Privacy has been thrown into the spotlight across the globe. Its relevance only grows as corporations both large and small mine information from users across platforms. Privacy has thus become the need of the hour, and the privacy the Dark Web provides has been one of its biggest USPs. It has harboured anyone requiring the shield of privacy, including political whistleblowers who have in the past released vital information on violations against citizens in tyrannical regimes as well as in democracies. Edward Snowden, whose claim to infamy indeed surrounded privacy and surveillance, used and continues to use the Dark Web to protect his communications from revealing his location in Moscow.

In the age of #FakeNews targeting the journalism community, the need to protect the private Tor gateways that many journalists use to protect their sensitive information seems to be of paramount importance. But despite what the creators of Tor would like to believe, the bulk of the active traffic (which differs from the actual number of sites present) in the aforementioned “illegal” branch of the Dark Web is predominantly that of child pornography distribution.

However, this need not spell the end of the privacy sphere created by Tor within the Dark Web. Using the same logic as the USDD, given that the increased activity of child pornography and abuse sites is a known factor, it becomes easier for authorities to single out threads of heightened activity within the Dark Web without compromising its integral cloak of privacy. This tactic was successfully used by the American FBI in the Playpen case, where it singled out the thread of rapid activity created by a website called Playpen, which had over 200,000 active accounts participating in the creation and viewing of child pornography. The FBI singled out the traffic for this site due to its dynamic activity, and once the source of the activity was precisely determined, the FBI, in an unprecedented move, extracted the Playpen website from the Dark Web onto a federal server and was then able to access the IP addresses of over 1,000 users, who were then prosecuted, with the creator of the site receiving 30 years of jail time. All this was done without breaching the privacy of other Tor users.

Thus, whilst Hamlet’s existential question may not have a middle ground to settle on, the status of regulation of the Dark Web could be established by following past precedent and using better non-invasive surveillance methods along with international cooperation, in order to respect its true intended purpose.


Dr. Usha Ramanathan’s Talk on the UIDAI Litigation

Posted on December 24, 2018 (updated December 4, 2020) by Tech Law Forum @ NALSAR

[Ed Note: The following post is based on Dr. Ramanathan’s enlightening talk at the NALSAR University of Law, Hyderabad. It has been authored by Karthik Subramaniam and Yashasvi Raj, first-year students of the aforementioned university, who, in a slightly longer but informative read, aptly put forth Dr. Ramanathan’s views on the Aadhaar issue and its judicial journey.

Dr. Usha Ramanathan, an internationally recognized legal expert, is currently a research fellow at the Centre for the Study of Developing Societies and a professor at the Indian Law Institute. Since 2009, she has consistently brought forth the loopholes in the Aadhaar project, exposing its shoddy functioning.]


The Supreme Court of India unanimously holds in a 9-0 verdict that Privacy is a Fundamental Right

Posted on August 24, 2017 by vanlalvena

In a welcome decision today, the Supreme Court of India unanimously held, 9-0, that privacy is a fundamental right under Article 21 in Part III of the Constitution.

Detailed comments will come soon.

You can find the judgment here, or alternatively here.

 


Cashless Societies: Causes for Concern

Posted on January 21, 2017 (updated August 11, 2017) by vanlalvena

[Infographic: cashless society. Source: CNN]

A cashless society is no longer a myth but an impending reality; one of the causes for concern is the issue of privacy, which this article deals with.

The idea of a cashless society, i.e., ‘a civilization holding money, but without its most distinctive material representation – cash’, is said to have originated in the late 1960s. The transition to cashless had been slow and steady, but it has accelerated rapidly over the last decade. As technology evolves, the shift from a cash-reliant to a cashless society is becoming more apparent; at least in urban society, ‘contactless payments’ and ‘non-cash money’ are not unheard of. It has been reported that the first debit card possibly hit the markets in the mid-1960s, and that by 1990 debit cards were used in about 300 million transactions, showing their rise in today’s society. Before welcoming this change with open arms, we must take care not to ignore the security and privacy concerns, some of which are addressed in this article.

As we transition from a cash-reliant society to a [quasi] cashless one, there are fears about phones being hacked or stolen, or about reliance on devices that require batteries or internet – what if either is unavailable? Conversely, however, our cash or wallets could be stolen, destroyed in a matter of seconds, or misplaced. The only difference is the medium of transaction.

Fear is a factor that inhibits change, and these fears are usually not unfounded. In 2014, Target, the second-largest discount store retailer in the United States, was hacked, and up to 70 million customers were hit by a data breach. Two years later, it was reported that roughly 3.2 million debit cards had been compromised in India, affecting several banks such as SBI, ICICI and HDFC.

Nevertheless, as pointed out earlier, just as financial details present online can be stolen, so can paper money. With each transaction taking place online, fears of online fraud persist; however, Guri Melby of the Liberal (Venstre) party noted, “The opportunity for crime and fraud does not depend on what type of payment methods we have in society.” A mere shift in the means of trade will not eliminate such crimes. It is here that I must clarify that a cashless society could take various forms and degrees, be it debit/credit cards, NFC payments, digital currencies such as bitcoin, or mobile transactions such as M-Pesa.

Bruce Schneier, cyber security expert and author of the best seller Data and Goliath, notes that the importance of privacy lies in protection from abuse of power. A hegemony of the authorities over our information – details [and means] of our every transaction – provides absolute power to the authorities and thus a much higher scope for abuse. Daniel Solove further notes that abuse of power by the Government could lead to distortion of data; moreover, even if we believe the government to be benevolent, we must consider that data breaches and hacks could (and do) occur.

Cash brings with it the double-edged sword of an anonymity that digital transactions do not provide. A completely cashless society might seem attractive in that each transaction can be traced and therefore possibly result in reduction of tax evasion or illicit and illegal activities; however, though that crime might cease to exist in that form, it could always evolve and manifest itself in some other form online.

One of the concerns raised in this regard is that the government could indefinitely hold, or be in possession of, our transaction history. This seems an innocent trade-off for the ease and convenience provided. The issue that arises, however, as Domagoj Sajter notes, is that every single citizen becomes a potential criminal and terrorist to the government, worthy of continuous and perpetual monitoring. Citizens become latent culprits whose guilt is implied, only waiting to be recorded and proven. The principle of innocent until proven guilty vanishes in the mind of the government.

Furthermore, a completely cashless society places power with the Government with no checks and balances on it. Advanced technology could disable the funding of mass actions, extensive protests and large-scale civil disobedience, all of which are important traits of democratic processes. It is pertinent to remember that Martin Luther King Jr. was tracked by the FBI. Providing the government with more ease in curtailing democratic processes leads to more autocratic governance.

Consider the following: an individual finds out that the Government or one of its agencies is committing a crime against humanity, and she reports it to the public. Not only could her personal life be excavated to find faults but any support that she would receive in terms of money (in a cashless society) could possibly be blocked by the Government. Minor faults could be listed and propaganda could be spread to discredit her point or deviate the masses’ attention. By controlling the economy, they could wring the arms of the media and force them to not focus on or to ignore the issues raised by her.

Michael Snyder also raises an important point about the erasure of autonomy in a cashless society: “Just imagine a world where you could not buy, sell, get a job or open a bank account without participating in ‘the system’”. It need not start with forcing people to opt in; simply providing benefits in some form could indirectly give people no choice but to opt in. The Supreme Court of India has noted multiple times that the Aadhaar card (a biometric identity card) cannot be made compulsory. However, the Aadhaar card has been made mandatory to avail of EPF pension schemes and LPG benefits, and even for IIT JEE 2017. The Government of India is even mulling making the Aadhaar number mandatory for filing income tax (I-T) returns and linking all bank accounts to the unique identity number by the end of this financial year. The government is concurrently working on a common mobile phone app that shopkeepers and merchants can use to receive Aadhaar-enabled payments, bypassing credit and debit cards and further moving to cashless transactions. The Aadhaar-enabled payment system (AEPS) is a biometric way of making payments, using only the fingerprint linked to Aadhaar. These are all part of the measures taken by the Indian government to force the Indian economy into a cashless form.

Policing of the citizen is not a purely hypothetical scenario; it has already taken place in the past. In 2010, a blockade was imposed by Bank of America, VISA, MasterCard and PayPal on WikiLeaks. In 2014, Eden Alexander started a crowdfunding campaign hoping to cover her medical expenses, but later, the campaign was shut down and the payments were frozen; the cause being that she was a porn actress. We must also take into account the empowerment that cash provides; consider an individual saving cash from their alcoholic or abusive spouse, or the individual who stuffs spare notes under her mattress for years because it gives her a sense of autonomy. We should take care that in seeking development, we do not disempower the downtrodden, but lift them up with us.

The idea of a cashless society is no longer strange, with multiple corporations and even countries having expressed interest in going cashless. Harvard economist and former IMF chief economist Kenneth Rogoff, in his case against cash, argues that a less-cash society (as distinct from a cash-less one) could reduce economic crime, and suggests in the same article that this could be achieved by gradually phasing out larger notes. A cashless or less-cash society appears inevitable: in Sweden, cash transactions already make up barely 2% of the value of all payments. The question is thus not when it will happen, but what safeguards we set up to protect our rights.

For further reading:

1. Melissa Farmer, Data Security In A Cashless Society: https://www.academia.edu/12799515/Data_Security_In_A_Cashless_Society

2. David Naylor, Matthew K. Mukerjee and Peter Steenkiste, Balancing Accountability and Privacy in the Network: https://www.cs.cmu.edu/~dnaylor/APIP.pdf

3. Who Would Actually Benefit from a Cashless Society?: https://geopolitics.co/2016/01/30/who-would-benefit-from-a-cashless-society/

4. Anne Bouverot, Banking the Unbanked: The Mobile Money Revolution: http://edition.cnn.com/2014/11/06/opinion/banking-the-unbanked-mobile-money/index.html

5. Kenneth Rogoff, Costs and Benefits to Phasing Out Paper Currency: http://scholar.harvard.edu/files/rogoff/files/c13431.pdf


GOOGLE PIXEL AND ITS ARTIFICIAL INTELLIGENCE: The New Big Daddy Watching You?

Posted on October 20, 2016 by Balaji Subramanian

Ed. Note: This post by Sayan Bhattacharya is a part of the TLF Editorial Board Test 2016.

Google launched its first smartphone series, the Pixel, earlier this month. The shift from being a software producer to producing both hardware and software was a calculated change in policy, a direct dig at Apple’s hardware throne.

Apple stood as the undisputed king of hardware design, with meticulously crafted software perfecting the user experience. Google, on the other hand, was the undisputed king of software and search engines, with a far broader range of software offerings than anyone else; even the most diehard iPhone fans spent most of their time on their devices using Google products. The changeover was thus a direct policy measure to cut into Apple’s hardware base by providing an alternative built around Google’s exclusive product range.

On the surface, the launch seems to be all about the glittery rivalry between Google and Apple, but the media, customers and the makers of our privacy laws, entangled in this mesh of technology, often ignore the bigger picture. One of the major components of Google’s technological edge over the iPhone is its artificial intelligence, which relies on active data mining. The presence or absence of privacy norms is what distinguishes the Pixel’s new features from the existing features of Apple devices. Google assumes its customers are willing to give up some amount of privacy to make life easier; Apple assumes its customers value their privacy above all else.

The latest artificial intelligence in the Pixel allows the software to read your mails, text messages and calendars. When Google’s AI magically delivers the answer to the question you asked, that is data mining. Nor is it against the law: technically, on paper, you have given Google the relevant permissions, usually by skimming the fine print or not reading it at all, allowing it to read through your chats, mails, location history, browsing and more in order to give you those magical results. The argument, then, is that this is mostly not free consent, because people lack important information while making the choice.

The major technological shift in the new AI that Google has built into the Pixel is its ability to actively read and understand the context of an act or a conversation. For example, if you are chatting on Google Allo or Google Home about going to dinner with your family at a particular time, you can expect a reminder about it, along with reviews of the restaurant and even a direct link to book an Uber ride. This is because the AI reads your conversations, figures out the context and links you to your needs over the web.

Adding to that, Google Allo, introduced to challenge the authority of messaging applications like WhatsApp, Snapchat and Messenger, is not end-to-end encrypted as those applications are, unless you move into incognito mode. Incognito mode within Allo is only an optional feature, rather than a default setting as in secure chat apps such as Apple’s iMessage, Facebook Messenger and WhatsApp. Allo’s privacy and security were consequently heavily criticised.

NSA whistleblower Edward Snowden criticised the Google Allo app on Twitter, saying Google’s decision to disable end-to-end encryption was dangerous. He asked people to avoid using the app, and his tweet was retweeted over 8,000 times.
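The distinction this criticism turns on is whether the service provider can read message contents at all. A toy sketch in Python (a one-time pad, purely illustrative and not Allo’s or Signal’s actual protocol; all names here are hypothetical) shows the difference: with end-to-end encryption, only the two endpoints hold the key, and the server merely relays ciphertext it cannot read.

```python
import secrets

# Toy end-to-end encryption via a one-time pad (illustration only,
# NOT a real messaging protocol): each byte of the message is XORed
# with a byte of a key known only to the two endpoints.

def e2e_encrypt(key: bytes, message: str) -> bytes:
    data = message.encode()
    assert len(key) >= len(data), "one-time pad must be at least message length"
    return bytes(k ^ m for k, m in zip(key, data))

def e2e_decrypt(key: bytes, ciphertext: bytes) -> str:
    # XOR with the same key bytes recovers the plaintext.
    return bytes(k ^ c for k, c in zip(key, ciphertext)).decode()

# The two endpoints share this key; the server never sees it.
shared_key = secrets.token_bytes(64)

# What the server stores and relays is only this ciphertext.
ciphertext = e2e_encrypt(shared_key, "dinner at 8?")

# Default (non-incognito) mode is the opposite: the server holds the
# plaintext itself, which is what enables context-reading AI features.
print(e2e_decrypt(shared_key, ciphertext))  # -> dinner at 8?
```

The trade-off the post describes falls directly out of this: an assistant that suggests restaurants from your chats needs server-side plaintext, which end-to-end encryption by design denies it.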

The essential problem with this kind of feature is that it prioritises data mining for ease of access over consumer privacy. That privacy of data is now an option rather than the norm is what calls into question the ethics of data mining, however much easier it makes one’s life. It is astounding that a third party can store your conversations, read them while actively understanding their context, and then apply that understanding to aid future actions on your device.

In another instance, if you back up your photos to Google Photos, the Google Assistant can recognise what is in a photo using computer vision, understanding when it was taken and who appears in it. The Google AI thus goes beyond merely mining your data to linking the data it excavates with that of other users, collected through its software. The ultimate goal is to link the entirety of the data collected into a kind of network that is omnipresent but cannot be seen. The question arises not from the networking itself but from the means of achieving it: the data is excavated without free consent and linked with external third-party data without prior permission.

Another major concern surrounding this vast data storage is government snooping through data packet inspection, which already exists on network connections. A switch to the Google Pixel means a switch to almost completely internet-run software, which further increases the chances of a breach of privacy.

Google aims to make its artificial intelligence the next big thing after its dominance of search engines and software: it wants its customers to move from a mobile-first world to an AI-first world. But the underlying assumption is that this can be done at the cost of user privacy.


Encryption and the extent of privacy

Posted on September 24, 2016 by Balaji Subramanian

Ed. Note: This post, by Benjamin Vanlalvena, is a part of the NALSAR Tech Law Forum Editorial Test 2016.

A background of the issue


© 2025 Tech Law Forum @ NALSAR | Powered by Minimalist Blog WordPress Theme