Melvin Kranzberg: Technology is neither good nor bad; nor is it neutral.
We converse with our computers and phones daily. Our ability to interact with them has been revolutionized by Large Language Models (LLMs), which let us ask questions and get responses in natural language. It is almost like talking to a very clever colleague; we treat LLMs as sources of information. This has opened the door to a variety of AI-powered tools, such as smartphones that can translate text visible in images into any language. Not sure what your T-shirt says? Just take a picture, highlight the text, and your phone will translate it for you. Sick!
ChatGPT has become a well-known and well-liked option among these LLMs. The significant demand for GPT and comparable models shows how widely they are used; everybody and their grandma wants to “talk” to them. These almost brand-new services keep emerging and being applied in situations where they have never been used before. We get the craze. But is it actually Google, but better?
Deep machine learning is a branch of Artificial Intelligence that trains programs on massive amounts of data. This is how LLMs learn to recognize patterns. The ImageNet collection, used to train models that work with images, contained over 14 million images as of 2021. Yet despite the enormous amounts of data involved, deep machine learning boils down to using math and data inputs to produce probabilistic outcomes. For example, language models are built to predict the most probable word, phrase, or sentence given a text input.
GPT-3 makes use of the petabyte-sized Common Crawl database (1 petabyte = 1,000 terabytes).
Wondering where you can find a real-life example of this? What do you use to type your little WhatsApp messages? Mobile keyboards that learn from our typing habits may have already come into play for you. ChatGPT, on the other hand, is far more sophisticated and can determine the likelihood that a specific word or phrase will occur next. It may not always fully “understand” our requests, but it analyzes the text and returns the string of words or sentences that, in its estimation, is most likely.
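To make “predict the most probable next word” concrete, here is a deliberately tiny sketch in Python. It is nothing like a real LLM (no neural network, just bigram counts over a toy corpus we made up for illustration), but it shows the same core idea: turn observed data into probabilities and pick the likeliest continuation.

```python
from collections import Counter, defaultdict

# A toy "training corpus" — real models train on terabytes of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its estimated probability."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# "cat" follows "the" in 2 of the 4 bigrams starting with "the".
print(predict_next("the"))  # ('cat', 0.5)
```

An LLM does this at a vastly larger scale, conditioning on whole passages rather than a single previous word, but the output is still a probability distribution over what comes next.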
But why do we write about it? What is so important about it? Well…
It is crucial to keep in mind that there are costs and risks associated with training machine learning algorithms. Even though the advantages of LLMs are obvious, we have to make sure that these models are created and used ethically and responsibly, taking into account the possibility of bias and other unintended consequences.
Let’s mention some GPT-related social concerns. OKO.press’s article on GPT1 is a comprehensive master post on what GPT is and the issues surrounding it. We can ask things like: if such a system can truly learn, is removing it equivalent to killing an animal in an experiment? It may sound absurd, but so can the terminology used to personify these systems in their descriptions.
As we have seen with Tesla’s autopilot mode2, automated systems that rely on machine learning models can already cause harm. In Brazil3, for instance, a mechanism has been put in place to assist with judicial sentencing. Yet we might not even be aware that we were assessed by an opaque model. Want to get into college? Imagine that not a single person reads your application. Just a strict, data-trained AI.
Another problem? These algorithms are also susceptible to bias. Type “professor” or “doctor” into Craiyon, for example, and men show up; type “nurse” and only women do. Even Google Translate4 exhibits unintended bias. Despite the filters that OpenAI has added to GPT, it is still possible to get a racist or sexist answer if you manage to get past them (news flash: it is not that hard). These models’ training data might also completely omit some groups. And the information collected from the Internet exhibits the biases of the Internet.
Additionally, in our recent blog post5, we mentioned the problem of using artists’ works without their consent. The public datasets used to train these models might also contain images that are part of the medical records of private people6.
Another issue: these models can also worsen misinformation, because GPT is perfectly capable of simply making answers up.
Deep machine learning in general, and ChatGPT specifically, provides a potent instrument for generating and processing natural language. It can be incredibly helpful, as long as you remember to check the sources and stay alert to unintended made-up answers. But keep the possible drawbacks, such as bias, in mind. As with any powerful technology, deep machine learning models should be approached cautiously, and their possible effects on society should be taken into account.