According to a study, if AI is given the power to make judgments, it could potentially impose stricter prison sentences and bail conditions.

According to a recent study by researchers at MIT, artificial intelligence is not as good as humans at making judgment calls and tends to hand down stricter penalties to those who break rules.
According to the study, if AI systems are used to forecast the likelihood that a criminal will reoffend, they could have real-world consequences such as longer prison terms or higher bail.
A group of researchers from universities and nonprofits in Massachusetts and Canada studied machine-learning models and found that, when AI is not trained on the right kind of data, it tends to make harsher judgments than humans.
The scientists devised four hypothetical situations in which individuals could break rules, such as keeping an aggressive dog in an apartment complex that prohibits certain breeds or posting vulgar remarks in an online comment section. Human participants then labeled the images or text, and their responses were used to train AI systems.
According to Marzyeh Ghassemi, who leads the Healthy ML Group at MIT's Computer Science and Artificial Intelligence Laboratory, many researchers in the field of artificial intelligence and machine learning believe that human judgments in data and labels are biased. However, Ghassemi suggests that the findings of this study indicate an even more concerning issue.
According to Ghassemi, because of a flaw in the data used to train these models, they are not even reproducing those already-biased human judgments.
If people knew that their labeling of images and text would be used to make judgments, the researchers argue, they would label them differently. Many companies are already adopting AI, or considering it, for tasks typically done by humans. The new study, led by Ghassemi, investigated how closely AI can replicate human judgment. The researchers found that AI systems trained with "normative" data, where humans explicitly label whether something violates a rule, produce more human-like responses than those trained with "descriptive" data, where humans label photos or text in a purely factual way, such as noting the presence of fried food in a photo of a dinner plate.
According to the study, when descriptive data is used, AI systems tend to over-predict violations, for example flagging the presence of fried food or high-sugar meals at a school that prohibits them.
The scientists devised hypothetical rules for four distinct settings: restrictions on school meals, dress codes, pet ownership in apartments, and guidelines for commenting online.
For instance, the researchers showed participants pictures of dogs and asked whether the dogs violated the rules of a fictional apartment complex that prohibits aggressive breeds from living on the property.
The researchers then compared the answers given under the normative framing with those given under the descriptive framing, and found that people were 20% more likely to report a dog as breaking the apartment complex's rules when labeling descriptively.
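As a rough illustration of that kind of gap, the toy sketch below compares the violation rate implied by descriptive labels with the rate implied by normative labels for the same set of items. The label lists and the 20-point difference they produce are invented for illustration and are not data from the study.

```python
# Hypothetical illustration of the descriptive-vs-normative labeling gap.
# Both lists label the same 10 dog photos; all values are made up.
descriptive_labels = [1, 1, 1, 0, 1, 0, 1, 0, 1, 0]  # "Does the photo show an aggressive-looking breed?"
normative_labels   = [1, 0, 1, 0, 1, 0, 0, 0, 1, 0]  # "Does this dog violate the apartment rule?"

descriptive_rate = sum(descriptive_labels) / len(descriptive_labels)
normative_rate = sum(normative_labels) / len(normative_labels)

print(f"Descriptive violation rate: {descriptive_rate:.0%}")  # 60%
print(f"Normative violation rate:   {normative_rate:.0%}")    # 40%
print(f"Gap: {descriptive_rate - normative_rate:.0%}")        # 20 percentage points
```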
The researchers then trained two AI systems on the four hypothetical scenarios, one with the normative data and the other with the descriptive data.
According to the study, the model trained on descriptive data was more likely to wrongly predict a rule violation than the model trained on normative data.
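A minimal sketch of that kind of comparison is shown below, using synthetic features and scikit-learn's LogisticRegression as stand-ins. It is not the study's actual pipeline, only an illustration of how a model trained on descriptive labels can end up flagging more violations than one trained on normative labels.

```python
# Sketch: train one model on descriptive labels and one on normative labels
# for the same items, then compare how often each flags a violation on new data.
# All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                 # stand-in features for photos or text
true_violation = (X[:, 0] > 0.5).astype(int)  # only clear-cut cases break the rule

# Descriptive labelers mark the raw attribute (e.g. "fried food present"),
# which fires more often than the actual rule violation.
descriptive_y = (X[:, 0] > 0.0).astype(int)
# Normative labelers judge the rule directly.
normative_y = true_violation

desc_model = LogisticRegression().fit(X, descriptive_y)
norm_model = LogisticRegression().fit(X, normative_y)

X_new = rng.normal(size=(200, 5))
print("Violations flagged by descriptive-trained model:", int(desc_model.predict(X_new).sum()))
print("Violations flagged by normative-trained model:  ", int(norm_model.predict(X_new).sum()))
```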
According to Aparna Balagopalan, a graduate student in electrical engineering and computer science at MIT who contributed to the study, this demonstrates the significance of the data.
The researchers emphasized the significance of aligning the training environment with the deployment environment when training models to identify rule violations. They suggested that data transparency could be helpful in addressing the problem of AI making assumptions about potential violations, and recommended incorporating both descriptive and normative data in the training process.
To avoid creating systems with excessively harsh moderation, it is necessary to acknowledge that data collected in human settings is being used to reproduce human judgment, Ghassemi told MIT News.
The report suggests that while humans can perceive subtle differences, AI models lack this ability. More broadly, the spread of AI has raised concerns in some industries that it could lead to significant job losses.
Earlier this year, Goldman Sachs released a report stating that approximately 300 million jobs worldwide could be impacted or replaced by generative AI.
According to recent research by Challenger, Gray & Christmas, an outplacement and executive coaching firm, the AI chatbot ChatGPT could take over at least 4.8 million jobs in the United States.
ChatGPT is an AI system that imitates human dialogue using prompts provided by humans.
According to a recent working paper from the National Bureau of Economic Research, OpenAI's Generative Pre-trained Transformer has already shown its advantages in certain professional fields, such as customer service, where workers were able to increase their efficiency.