October 18, 2024

154news

Latest Hot News

April 4, 2024

Whistleblower Alerts: How Microsoft’s OpenAI-Powered Copilot Could Generate Harmful Imagery


Microsoft whistleblower warns US authorities that OpenAI-powered Copilot can generate harmful images 'too easily'

Like Google's Gemini AI image generator, Microsoft's Copilot, powered by OpenAI's ChatGPT, can create highly believable yet deeply troubling fake images. A whistleblower has warned US officials and Microsoft's board about the problem in letters.

A Microsoft employee has raised concerns about harmful and offensive images created by the company's AI image-generation tool. Shane Jones, who describes himself as a whistleblower, has written to US regulators and Microsoft's board of directors, urging them to act, according to an Associated Press report.

Jones recently met with US Senate staffers to discuss his concerns and also sent a letter to the Federal Trade Commission (FTC). The FTC confirmed it had received the letter but declined to comment further.

Microsoft said it is committed to addressing employee concerns and appreciated Jones' efforts in testing the technology, but recommended he use internal reporting channels so the issues could be investigated and addressed.

Jones, a senior software engineering manager, has spent the past three months trying to address safety concerns with Microsoft's Copilot Designer. He warned that the tool can produce harmful content even from innocuous prompts: given 'car accident', for example, Copilot Designer sometimes generates inappropriate, sexually objectified images of women.

In his letter to FTC Chair Lina Khan, Jones also flagged other troubling output, including violent imagery, political bias, underage drinking and drug use, copyright violations, conspiracy theories, and religious imagery.

Jones has raised these concerns publicly before. Microsoft initially directed him to take his findings to OpenAI, which he did. In December he posted a letter to OpenAI on LinkedIn, which Microsoft's legal team demanded he take down. He has nonetheless persisted, bringing his concerns to the US Senate's Commerce Committee and the Washington State Attorney General's office.

Jones noted that while the core problem lies with OpenAI's DALL-E model, users who generate images through OpenAI's ChatGPT are less likely to encounter harmful output, because the two companies have layered different safeguards onto the technology.

"He communicated through text that a lot of the issues associated with Copilot Designer are already handled by the built-in protective measures of ChatGPT."

The arrival of impressive AI image generators in 2022, led by OpenAI's DALL-E 2 and followed by the launch of ChatGPT, captured broad public attention and prompted major technology companies such as Microsoft and Google to build their own versions.

Without strong safeguards, however, the technology can be abused: users can generate harmful "deepfake" images depicting political figures, war scenes, or nonconsensual nudity, falsely attributing them to real people.

Google, for instance, temporarily paused the Gemini chatbot's image-generation feature after controversy over its depictions of race and ethnicity, such as showing people of color in Nazi-era military uniforms.

(With inputs from agencies)


Related Articles

NVIDIA's Jensen Huang says AI hallucinations can be solved, expects artificial general intelligence in about 5 years

OpenAI's Sora can produce realistic-looking explicit videos; developers are racing to implement a fix

Apple finally unveils MM1, its multimodal AI model for generating text and images

Microsoft hires DeepMind cofounder Mustafa Suleyman to head its new consumer AI division


April 4, 2024

Virtual Warfare: US Army Explores OpenAI’s Generative Models for Battle Planning in Simulated Video Games


The US Army is testing generative AI to evaluate battle plans in video games, according to a recent study

The US Army is exploring the use of generative AI models such as OpenAI's ChatGPT for critical military tasks, employing virtual military video games to test tactics and plan battles with AI assistance.

The US Army Research Laboratory is assessing the potential of OpenAI's generative AI technologies for combat planning inside a virtual military video game.

New Scientist reports that US Army researchers are using OpenAI's GPT-4 Turbo and GPT-4 Vision models to gather information about virtual battlefield terrain, distinguish friendly from enemy forces, and supply military knowledge on attack and defense strategies. They are also testing two other AI models built on older technology.

In the tests, the AI assistants were tasked with destroying all enemy forces and capturing a designated objective. Once assigned the mission, each assistant proposed a range of plans, which a human player acting as commander then refined. OpenAI's GPT models outperformed the others, but they also suffered more casualties while accomplishing their mission goals.

The use of generative AI for battle planning is only one part of the US Army's broader push to adopt artificial intelligence.

Project Maven, the US Department of Defense's flagship AI effort, has proven effective at tasks such as identifying rocket launchers in Yemen and surface vessels in the Red Sea, and has helped pick out targets for strikes in Iraq and Syria, according to a Bloomberg report.

The prospect of AI in warfare nonetheless raises serious ethical questions. Giving machines the power to make decisions that could cost lives evokes the Terminator films more than the optimistic vision usually attached to AI. Even so, the military continues to press ahead with the technology.

The Pentagon has asked US lawmakers for substantial funding to strengthen its artificial intelligence and networking capabilities, according to a DefenseScoop report, and has created roles such as Chief Digital and AI Officer to help integrate and deploy the technology across the department.

(With inputs from agencies)


Related Articles

After several deaths, an Indian student is now missing in the US: What's happening?

AI hallucinations can be fixed, artificial general intelligence is roughly 5 years away: NVIDIA's Jensen Huang

'US should stop aid to Ukraine…China will not invade Taiwan': John Mearsheimer tells Palki Sharma

Apple finally unveils MM1, its multimodal AI model for generating text and images


April 4, 2024

Unveiling the Google Espionage Saga: Former Engineer Charged with Theft of AI Secrets for Chinese Firms


Ex-Google engineer accused of stealing confidential AI files and passing them to Chinese tech firms

Linwei Ding, a former Google software engineer, is accused of illicitly taking more than 500 unique confidential files from Google and passing them to a Chinese firm. While still employed at one of Google's US campuses, Ding took on the role of CTO at a China-based tech firm.

The Department of Justice announced on Wednesday that Linwei Ding, a former Google software engineer who also goes by Leon Ding, has been charged with allegedly stealing artificial intelligence trade secrets from the company. Ding, a Chinese national, was arrested in Newark, California, and faces four counts of federal trade secret theft, each carrying a maximum sentence of 10 years in prison.

Attorney General Merrick Garland announced the charges at an American Bar Association conference in San Francisco, stressing ongoing concerns about economic espionage by China and the national security risks posed by advances in artificial intelligence and other emerging technologies.

FBI Director Christopher Wray underscored the gravity of the case, saying the latest charges illustrate the lengths to which affiliates of Chinese companies are prepared to go to steal American innovation. He stressed the damage such theft does to American jobs and its serious consequences for the country's economy and national security.

Google said it had discovered that an employee had stolen "multiple documents" and promptly reported the incident to law enforcement.

Google spokesperson Jose Castaneda said the company maintains strict safeguards to prevent the unauthorized disclosure of its confidential commercial information and trade secrets. He added that an investigation found the employee had improperly accessed numerous documents, prompting Google to refer the case to law enforcement. Castaneda thanked the FBI for its help in protecting Google's information and said the company would continue to cooperate with authorities.

In the fiercely competitive technology industry, artificial intelligence is a central battleground, with major implications for both commercial success and security.

The indictment, unsealed Wednesday in the Northern District of California, alleges that Ding, a Google employee since 2019 with access to confidential information about the company's supercomputing data centers, began transferring numerous files to a personal Google Cloud account without authorization two years ago.

Prosecutors said that shortly after the thefts began, Ding was offered the post of chief technology officer at an early-stage tech company in China. The firm touted its use of AI and offered Ding a monthly salary of roughly $14,800, plus an annual bonus and company stock.

According to the indictment, Ding traveled to China, took part in the company's investor meetings, and sought to raise funding for its operations. He also founded, and served as CEO of, another China-based startup focused on training large AI models with supercomputing chips.

Ding did not tell Google about his ties to either Chinese firm, prosecutors said; the connections only came to light during the investigation. He resigned from Google on December 26.

Shortly after his departure, Google officials learned that Ding had presented himself as the CEO of one of the Chinese firms at an investor event in Beijing just three days later. CCTV footage also showed that another employee had been badging Ding's access card at Google's US offices to make it appear he was at work when he was actually in China.

Once these discrepancies were discovered, Google immediately revoked Ding's network access, remotely locked his laptop, and began reviewing his past network activity. In January, the FBI searched Ding's home and seized his electronic devices; a subsequent warrant for his personal accounts turned up more than 500 unique confidential files allegedly stolen from Google.

(With inputs from agencies)


Related Articles

After several deaths, an Indian student is now missing in the US: What's happening?

Chinese national teams up with a Canadian to steal Tesla's battery technology; the pair launch their own business but are arrested in the US

AI hallucinations can be fixed, artificial general intelligence is roughly 5 years away: NVIDIA's Jensen Huang

'US should stop aid to Ukraine…China will not invade Taiwan': John Mearsheimer tells Palki Sharma


© 2024 Firstpost. All rights reserved.
