The user-friendly chatbot ChatGPT was launched only six months ago, but it already seems as though it has always been around. Two months after launch, the chatbot had attracted 100 million active users, faster than any other tech platform ever. For comparison, it took Instagram several years to reach that mark.
Many users have taken the AI engine with them to work, using it for a range of tasks like composing emails, summarizing meetings, and even checking over their computer code. Major companies have recently recognized the advantages of the language processing engine and have begun providing employees with its advanced premium version, available to paid subscribers only, in order to improve productivity, or simply to give the office a leading-edge image.
The launch of ChatGPT-4 Plus in March, for a $20 monthly subscription, has accelerated the trend with a service that offers greater precision, swifter responses, and fewer of the errors that could embarrass many companies.
Israeli software companies like Lightricks, Wix, Lemonade and Taboola have begun paying for ChatGPT-4 subscriptions. Some of the international tech giants have not missed the trend either, although some have expressed reservations about the move. Microsoft, the main investor behind OpenAI, Sam Altman's AI company that launched ChatGPT, was one of the first to provide a premium subscription to its employees free of charge. After all, it is the main beneficiary of the trend.
Many companies seem to avoid disclosing whether they allow employees to use ChatGPT. Microsoft and Meta preferred not to respond to "Globes" inquiries on the subject, a lack of transparency that also characterized the other tech giants we contacted. And for good reason: the tech giants are offering AI engines to the masses, but when it comes to their own work processes, they are extremely conservative.
"We will open an account for every employee that wants ChatGPT"
AI tools in general, and ChatGPT in particular, have improved companies' performance and optimized their work output, and the effect is evident across all departments and positions, from development teams to human resources and sales staff.
Udi Menkes, an entrepreneur and AI consultant who helps companies integrate these technologies into their products and internal enterprise culture, tells "Globes": "The main uses in organizations, apart from introducing language models as additions to their products, are creating marketing content, categorizing and documenting products, writing and completing code by developers, summarizing content, processing customer feedback, and more."
One of the improvements recently integrated into the chatbot that has made it relevant for organizations is its connection to the Internet. Menkes says, "The connection adds context to the model with up-to-date information and opens up a new area of possibilities."
Several companies provide their employees with tools beyond ChatGPT-4. Taboola, for example, gives its employees subscriptions to a variety of text-to-image generators: Midjourney, Stable Diffusion, DALL-E and the GenAI bot. Starting this week, Taboola will also give its employees BARD, Google's equivalent of ChatGPT.
Jerusalem-based Lightricks, which develops applications for processing video and images on mobile and is known for the selfie app Facetune, encourages its employees to use ChatGPT-4. Unlike counterparts at many companies around the world, VP creative marketing Omer Rabinovitz is not afraid to say that the company makes sure the engine is accessible to all employees in all departments. "Any employee of the company who wants to, will have a ChatGPT account opened for them," he tells "Globes." "We hold workshops to explain to employees how to get the most out of the technology and achieve better results."
According to Rabinovitz, use of these tools immeasurably improves the efficiency of the company's employees and its products. "We save time and the results are more focused on what we want to offer the users. We get better ideas in a significantly shorter time. Besides streamlining the work, the employees report that the engine gives them more power."
Managers are concerned that sensitive data will be leaked
Not all tech companies are enthusiastic about introducing language engines into their work environment. Some companies prohibit the use of ChatGPT-4 out of concern for privacy and data security. One of the most prominent of them is Amazon, which restricts its employees' use of the chatbot and warns against entering sensitive data into it. Samsung, the world's largest smartphone maker, has also expressed concerns about ChatGPT, after it was reported that managers were furious with employees who entered sensitive data into the chatbot. But while banning work with ChatGPT, Samsung has also reported that it is developing its own AI tool.
In practice, all data put into ChatGPT ends up in the hands of one company - OpenAI. This is essential to the mechanism: data collection matters in this technology because the data is used to train and teach the chatbot. In fact, the very use of the tool constitutes the initial and only consent required. If a security breach were to allow access to the OpenAI database, sensitive information could be misused. In addition, despite its many advantages, the chatbot is difficult to trust when it comes to reliability. It can provide incorrect information formulated in a convincing and professional manner, and if the content is not checked, such information may spread among employees and harm the functioning of the company.
The suspicion of the tech giants is, ironically, also evident at Google, which recently launched BARD - a natural language engine that competes with the Microsoft-backed ChatGPT. Google forbids its own employees from using BARD as well as ChatGPT. The reason is that these tools not only generate text or code, but also learn from their users, and thus create an opening for sensitive data to leak outside the company.
Instead, Google is developing an internal language model dedicated to employees, called Duckie, which is not open to users outside the company. The tool, which is in advanced stages of development, provides employees with answers to any work-related question. For summarizing texts or improving writing, Google employees use an internal add-on to Google Docs, one of many in-house Google developments intended to prevent the use of competitors' tools.
Companies work independently to prevent data leakage
Until there is institutionalized regulation of these AI tools, which are being improved at a frantic pace, companies are working independently to prevent data leakage.
"This is a field that is running ahead really fast and we follow the changes," says Rabinovitz. "It is critical for us not to do anything that could be gray and illegal. If there is something that requires examination, we immediately inquire with the legal department."
Meanwhile, ChatGPT also allows users to protect themselves so that they feel more comfortable using the tool. For example, the ChatGPT settings include an option to block the flow of data and prevent the model from learning from it. But even after changing these settings, the data may still be sent to the database.
Other companies solve the problem by building a private chatbot on top of the existing infrastructure. One of them is Microsoft, which launched a service that allows the creation of a private space on the engine. "In this way, Microsoft gives companies the peace of mind and the sense of security that their data does not have to go outside the company," says Menkes.
Published by Globes, Israel business news - en.globes.co.il - on May 18, 2023.
© Copyright of Globes Publisher Itonut (1983) Ltd., 2023.