Risks of Using AI Tools: Privacy and Security Challenges

Generative artificial intelligence (AI) chatbots such as ChatGPT and Copilot are advancing rapidly, raising concerns about privacy and security, especially when these bots are used in the workplace.


Concerns extend beyond chatbots to other AI tools. For instance, Microsoft's new Recall feature can capture screenshots of a computer screen every few seconds. Many people worry that this poses a significant security risk, and cybersecurity researchers have shown that enabling it can make computers easier to hack, exposing any stored information.


Concerns are also growing about the new ChatGPT app for Mac computers, which can take screenshots, potentially capturing sensitive data.


In another development, the U.S. House of Representatives banned Microsoft's Copilot for its employees after its Office of Cybersecurity deemed the tool a risk, citing the potential for House data to leak to external cloud services.


Moreover, there are warnings that using Copilot in Microsoft 365 work applications could put sensitive data at risk.

Google, for its part, had to adjust its new AI Overviews search feature after users reported misleading and bizarre answers to their queries.


These issues highlight the many challenges of using AI in the workplace. Below, we outline those challenges and the steps companies and employees can take to protect their privacy and security.


First: What are the challenges of using AI at work?

The biggest challenge for anyone using generative AI at work is the risk of exposing sensitive company or personal data: most generative AI systems retain the information users enter and may use it to train their language models.


Another risk is that the AI systems themselves may be targeted. If malicious actors gain access to the large language model (LLM) powering these tools, they could extract sensitive data, inject incorrect or misleading outputs, or spread malware.


Second: How can you maintain privacy and security when using AI at work?

Generative AI poses several potential risks, but there are steps companies and employees can take to enhance privacy and security, including:


  • Avoid entering any confidential information about yourself or your company when prompting a public AI tool such as ChatGPT or Gemini. For example, write "suggest a template for organizing a budget" rather than "my budget is... suggest how to spend it on a project..." (a minimal pre-filtering sketch follows this list).
  • When using AI tools for research, verify the information and ask the bot to provide references and links to sources. If you ask the AI tool to write code, thoroughly review it before use.
  • Adjust the AI tool's settings to prevent your data from being used to train its language models. Some AI tool developers offer settings to restrict data usage, with OpenAI being a prominent example.
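
To make the first point concrete, here is a minimal sketch, in Python, of pre-filtering a prompt on your own machine before it is sent to a public AI tool. The regular-expression patterns and the `redact` helper are illustrative assumptions for demonstration only, not a complete data-loss-prevention solution.

```python
import re

# Illustrative patterns only -- a real data-loss-prevention filter would
# need far broader coverage (names, account numbers, addresses, ...).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "MONEY": re.compile(r"[$€£]\s?\d[\d,]*(?:\.\d+)?"),
}

def redact(prompt: str) -> str:
    """Replace obviously sensitive substrings with placeholder tags
    before the prompt leaves your machine."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

if __name__ == "__main__":
    risky = "Our Q3 budget is $1,250,000; email jane.doe@example.com."
    print(redact(risky))
    # Output: Our Q3 budget is [MONEY]; email [EMAIL].
```

The idea is simply that redaction happens locally, before any network call: the AI tool sees the placeholder tags, not the underlying figures or contacts.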


Third: What role do AI developers play in ensuring user privacy?

Many companies that develop AI models and integrate generative AI into their products say they prioritize user privacy. Microsoft, for instance, emphasizes security and privacy in its Recall feature and lets users control it through their computer's settings.


Google asserts that it provides users with ways to control their data and does not use private user information for ads. OpenAI offers several ways for users to prevent the bot from storing and using their data to train language models.
