<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>OpenAI &#8211; Mazzaltov World News</title>
	<atom:link href="https://news.mazzaltov.com/tag/openai/feed/" rel="self" type="application/rss+xml" />
	<link>https://news.mazzaltov.com</link>
	<description>Your Reliable Source of Global News</description>
	<lastBuildDate>Tue, 10 Mar 2026 11:54:13 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">193366028</site>	<item>
		<title>CANADA: Family of child injured in school shooting sues OpenAI</title>
		<link>https://news.mazzaltov.com/canada-family-of-child-injured-in-school-shooting-sues-openai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=canada-family-of-child-injured-in-school-shooting-sues-openai</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Tue, 10 Mar 2026 11:54:10 +0000</pubDate>
				<category><![CDATA[Mazzaltov News]]></category>
		<category><![CDATA[Tech News]]></category>
		<category><![CDATA[Canada]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=35432</guid>

					<description><![CDATA[The family of a girl critically injured during a mass shooting at a Canadian school is&#160;suing ChatGPT-maker OpenAI, claiming it had been aware the suspect had been planning an attack&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">The family of a girl critically injured during a mass shooting at a Canadian school is&nbsp;<a target="_blank" href="https://www.courthousenews.com/wp-content/uploads/2026/03/tumbler-ridge-openAI.pdf" rel="noreferrer noopener">suing ChatGPT-maker OpenAI</a>, claiming it had been aware the suspect had been planning an attack but failed to alert the authorities.</p>



<p class="">Twelve-year-old Maya Gebala was&nbsp;<a href="https://www.bbc.co.uk/news/articles/c0rjg9ddgkqo">shot in the neck and head</a>&nbsp;in the attack in Tumbler Ridge on 10 February and remains in hospital.</p>



<p class="">An initial ChatGPT account linked to the suspect, 18‑year‑old Jesse Van Rootselaar, was banned by OpenAI in June 2025 due to the nature of her conversations with the chatbot, but Canadian police were not notified.</p>



<p class="">OpenAI said it was committed to making &#8220;meaningful changes&#8221; to help prevent similar tragedies in the future.</p>



<p class="">Eight people were killed in the attack, including five young children and the suspect&#8217;s mother, in one of the deadliest shootings in Canadian history.</p>



<p class="">The civil lawsuit, brought by Gebala&#8217;s mother Cia Edmonds, alleges Van Rootselaar set up an account with ChatGPT before she turned 18 &#8211; something users can do with parental consent.</p>



<p class="">The plaintiffs allege no age verification took place on the site.</p>



<p class="">The lawsuit claims the suspect saw the chatbot as a &#8220;trusted confidante&#8221; and described &#8220;various scenarios involving gun violence&#8221; to it over several days in late spring or early summer 2025.</p>



<p class="">Twelve OpenAI employees then flagged the posts as &#8220;indicating an imminent risk of serious harm to others&#8221; and recommended that Canadian law enforcement be informed, the lawsuit alleges.</p>



<p class="">Instead, it is alleged the request to contact the authorities was &#8220;rebuffed&#8221; and the only action taken was to ban Van Rootselaar&#8217;s account.</p>



<p class="">OpenAI has&nbsp;<a href="https://www.bbc.co.uk/news/articles/cn4gq352w89o">previously said</a>&nbsp;it did not alert police because the account did not meet its threshold of a credible or imminent plan for serious physical harm to others.</p>



<p class="">The suspect was then able to open a second ChatGPT account, despite having been flagged by OpenAI&#8217;s systems in the past, and &#8220;continue planning scenarios involving gun violence&#8221;.</p>



<p class="">The lawsuit claims the company &#8220;had specific knowledge of the shooter&#8217;s long-range planning of a mass casualty event,&#8221; but &#8220;took no steps to act upon this knowledge&#8221;.</p>



<p class="">The plaintiffs state that, as a result of the company&#8217;s conduct, Gebala, who was shot at three times after trying to lock a library door to keep out the shooter, has suffered a &#8220;catastrophic brain injury&#8221;.</p>



<h2 class="wp-block-heading">OpenAI&#8217;s response</h2>



<p class="">In a statement, an OpenAI spokesperson called the events an &#8220;unspeakable tragedy&#8221;, adding its thoughts remained with the victims, their families and the community.</p>



<p class="">&#8220;OpenAI remains committed to working with government and law enforcement officials to make meaningful changes that help prevent tragedies like this in the future,&#8221; a spokesperson said.</p>



<p class="">On 4 March, the CEO of OpenAI, Sam Altman, met virtually with Canada&#8217;s artificial intelligence minister, Evan Solomon, and the premier of British Columbia, David Eby.</p>



<p class=""><a target="_blank" href="https://www.wsj.com/tech/ai/canada-says-openai-ceo-altman-pledged-to-toughen-safety-protocols-7962b26b?gaa_at=eafs&amp;gaa_n=AWEtsqc--7TWpomEuoj07OvUxG_kGorBNDRnyYfaXuEDdYE7mxXo8ExfuBceh08JrrU%3D&amp;gaa_ts=69afebbd&amp;gaa_sig=szwWCvY8gMXKI4Wzpl2QC9ahYe86muFPtlYU8Cz8vfJin1Vr9jVE8NjKXwindIZL2ksRvri3sunmI2bgl8RXAQ%3D%3D" rel="noreferrer noopener">According to the Wall Street Journal</a>, Altman &#8220;pledged to strengthen protocols on notifying police over potentially harmful interactions&#8221; and to apologise to the Tumbler Ridge community.</p>



<p class="">In&nbsp;<a target="_blank" href="https://cdn.openai.com/pdf/8e938d69-0b67-4994-b9ff-683733ed587e/openai-letter-minister-solomon.pdf" rel="noreferrer noopener">an open letter to Canadian officials on 26 February</a>, penned by OpenAI&#8217;s vice-president of global policy and shared with media outlets, the company said it had implemented a series of changes in recent months, including enlisting the help of &#8220;mental health and behavioural experts&#8221; to assess cases and making the criteria for referral to police &#8220;more flexible&#8221;.</p>



<p class="">OpenAI said that, under the new guidelines, it would have reported the suspect&#8217;s ChatGPT account.</p>



<p class="">&#8220;We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders,&#8221; the company wrote.</p>



<p class="">OpenAI said it would also establish a direct point of contact with Canadian law enforcement so it can quickly flag any possible future cases with &#8220;potential for real world violence&#8221;.</p>



<p class="">Canada&#8217;s AI minister Evan Solomon said on 27 February that while legislators saw a willingness by the tech firm to improve its protocols, &#8220;we have not yet seen a detailed plan for how these commitments will be implemented in practice&#8221;.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">35432</post-id>	</item>
		<item>
		<title>USA: What might really be behind failed bid for OpenAI</title>
		<link>https://news.mazzaltov.com/usa-what-might-really-be-behind-failed-bid-for-openai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=usa-what-might-really-be-behind-failed-bid-for-openai</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Tue, 18 Feb 2025 13:00:00 +0000</pubDate>
				<category><![CDATA[Business News]]></category>
		<category><![CDATA[Tech News]]></category>
		<category><![CDATA[USA News]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[Elon Musk]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Sam Altman]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=23874</guid>

					<description><![CDATA[OpenAI&#8217;s board of directors has officially rejected Elon Musk&#8217;s nearly $100bn offer for the maker of what is the world&#8217;s best-known artificial intelligence (AI) tool, ChatGPT. But the unsolicited bid&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">OpenAI&#8217;s board of directors has officially rejected Elon Musk&#8217;s nearly $100bn offer for the maker of what is the world&#8217;s best-known artificial intelligence (AI) tool, ChatGPT.</p>



<p class="">But the unsolicited bid might not be a failure &#8211; at least as far as Musk is concerned, experts say.</p>



<p class="">That&#8217;s because the offer could still complicate CEO Sam Altman&#8217;s plans to transform OpenAI from a non-profit controlled entity to a for-profit company.</p>



<p class="">Musk is &#8220;basically trying to stymie OpenAI&#8217;s growth trajectory,&#8221; said University of Cambridge associate teaching professor Johnnie Penn in an&nbsp;<a target="_blank" href="https://www.youtube.com/watch?v=Kv8NFsGjpew&amp;t=81s" rel="noreferrer noopener">interview</a>&nbsp;with the BBC.</p>



<h2 class="wp-block-heading">Profit &amp; non-profit</h2>



<p class="">Last week, Musk and a consortium of investors including Hollywood superagent Ari Emanuel&nbsp;<a target="_blank" href="https://www.documentcloud.org/documents/25524553-116-1/" rel="noreferrer noopener">tabled</a>&nbsp;a $97.4bn (£78.4bn) offer for all of OpenAI&#8217;s assets.</p>



<p class="">It was a huge sum &#8211; but less than the $157bn the firm was valued at in a funding round just four months ago, and much lower than the $300bn that some think it is worth now.</p>



<p class="">Complicating all of this is OpenAI&#8217;s&nbsp;<a target="_blank" href="https://openai.com/our-structure/" rel="noreferrer noopener">unusual structure</a>&nbsp;which involves a partnership between non-profit and for-profit arms.</p>



<p class="">Mr Altman is understood to&nbsp;<a href="https://www.bbc.co.uk/news/articles/c8rd0jd1g6xo">want to change that</a>, stripping it of its non-profit board.</p>



<p class="">That involves costs which Mr Musk is seemingly trying to inflate.</p>



<p class="">&#8220;What Musk is trying to do here is raise the perceived value of the non-profit arm of OpenAI, so that OpenAI has to pay more to get out of the obligations it has to its own non-profit,&#8221; said Dr Penn.</p>



<p class="">The value of its non-profit assets isn&#8217;t clear. With his bid, Musk was floating a price, according to Cornell University senior lecturer Lutz Finger, who is also the founder and CEO of AI startup R2Decide.</p>



<p class="">&#8220;By Musk putting a price tag on the non-profit part, he makes the split way more expensive for Altman to do,&#8221; Mr Finger told the BBC. &#8220;It&#8217;s very simple.&#8221;</p>



<p class="">Mr Musk justified his actions by saying he wants to return OpenAI &#8211; which he co-founded &#8211; to its non-profit roots and original mission of developing AI for the benefit of humanity.</p>



<p class="">Others, though, suggest he has somewhat less noble motives linked to his own AI company xAI and chatbot Grok, which have received a lacklustre response from the public.</p>



<p class="">&#8220;Musk has missed the AI train, somewhat. He&#8217;s behind, and he has made several attempts to catch up,&#8221; Mr Finger said.</p>



<p class="">Now, Mr Finger says, Mr Musk is trying to kneecap his most formidable competitor.</p>



<p class="">An already-tense relationship appeared to worsen further last week with Mr Altman taunting Mr Musk&#8217;s offer on X, and Mr Musk retorting by calling his onetime partner a &#8220;swindler&#8221;.</p>



<p class="">Mr Altman then hit back in an interview with Bloomberg, opining that Mr Musk is not &#8220;a happy person&#8221; and saying his decisions are made from a &#8220;position of insecurity&#8221;.</p>



<p class="">The tit-for-tat is also playing out in court, where US district judge Yvonne Gonzalez Rogers is considering Mr Musk&#8217;s request for an injunction that would block OpenAI from its planned conversion.</p>



<p class="">He claims that he will be irreparably harmed without her intervention.</p>



<p class="">&#8220;It is plausible that what Mr Musk is saying is true. We&#8217;ll find out. He&#8217;ll sit on the stand,&#8221; Gonzalez Rogers said during a hearing in&nbsp;<em>Musk v Altman</em>&nbsp;earlier this month in Oakland, California.</p>



<p class="">According to OpenAI&#8217;s lawyers, Mr Musk&#8217;s recent bid contradicts his earlier claims that OpenAI&#8217;s assets cannot be transferred away for &#8220;private gain.&#8221;</p>



<p class="">&#8220;[O]ut of court, those constraints evidently do not apply, so long as Musk and his allies are the buyers,&#8221; their reply brief&nbsp;<a target="_blank" href="https://ecf.cand.uscourts.gov/cgi-bin/show_temp.pl?file=merged_14385_-1-1739515812.pdf&amp;type=application/pdf" rel="noreferrer noopener">states</a>.</p>



<p class="">Some observers say making a deal never appeared to be his goal.</p>



<p class="">&#8220;I think he&#8217;s just trying to create noise and news and consternation,&#8221; says Karl Freund, founder and principal analyst at Cambrian-AI.</p>



<p class="">But in addition to causing problems for his old rival, that strategy could inflict lasting damage on Mr Musk&#8217;s own reputation.</p>



<p class="">&#8220;He&#8217;s brilliant. He creates incredible companies that are doing incredible things. But his personal agenda is causing people to question his motives,&#8221; Mr Freund said.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">23874</post-id>	</item>
		<item>
		<title>India: Media pile into lawsuit against OpenAI chatbot ChatGPT</title>
		<link>https://news.mazzaltov.com/india-media-pile-into-lawsuit-against-openai-chatbot-chatgpt/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=india-media-pile-into-lawsuit-against-openai-chatbot-chatgpt</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Thu, 06 Feb 2025 23:00:00 +0000</pubDate>
				<category><![CDATA[Asian News]]></category>
		<category><![CDATA[Tech News]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[India]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=22993</guid>

					<description><![CDATA[India&#8217;s biggest news organisations are seeking to join a lawsuit against OpenAI, the US startup behind ChatGPT, for alleged unauthorised use of their content. The news organisations include some of&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">India&#8217;s biggest news organisations are seeking to join a lawsuit against OpenAI, the US startup behind ChatGPT, for alleged unauthorised use of their content.</p>



<p class="">The news organisations include some of India&#8217;s oldest publications like The Indian Express, The Hindu, The India Today group, billionaire Gautam Adani-owned NDTV, and over a dozen others.</p>



<p class="">OpenAI denies the allegations and told the BBC that it uses &#8220;publicly available data&#8221; that are in line with &#8220;widely accepted legal precedents&#8221;.</p>



<p class="">On Wednesday, OpenAI CEO Sam Altman was in Delhi to discuss India&#8217;s plan for a low-cost AI ecosystem with IT Minister Ashwini Vaishnaw.</p>



<p class="">He said India &#8220;should be one of the leaders of the AI revolution&#8221;, adding that earlier comments from 2023, when he said Indian firms would struggle to compete, had been taken out of context.</p>



<p class="">&#8220;India is an incredibly important market for AI in general and for OpenAI in particular,&#8221; local media quoted him as saying at the event.</p>



<p class="">The legal case filed against OpenAI in November by Asian News International (ANI), India&#8217;s largest news agency, is the first of its kind in India.</p>



<p class="">ANI accuses ChatGPT of using its copyrighted material illegally &#8211; which OpenAI denies &#8211; and is seeking damages of 20m rupees ($230,000; £185,000).</p>



<p class="">The case holds significance for ChatGPT given its plans to <a href="https://www.thehindubusinessline.com/info-tech/openai-eyes-india-opportunity-as-appetite-for-llms-surges/article69095768.ece" target="_blank" rel="noreferrer noopener">expand</a> in the country. According to a survey, India already has ChatGPT&#8217;s largest user base.</p>



<p class="">Chatbots like ChatGPT are trained on massive datasets collected by crawling through the internet. The content produced by nearly 450 news channels and 17,000 newspapers in India holds huge potential for this.</p>



<p class="">There is, however, no clarity on what material ChatGPT can legally collect and use for this purpose.</p>



<p class="">OpenAI is facing at least a dozen lawsuits across the world filed by publishers, artists and news organisations, who have all accused ChatGPT of using their content without permission.</p>



<p class="">The most prominent of them was filed by The New York Times in December 2023, in which the newspaper demanded &#8220;billions of dollars&#8221; in damages from OpenAI and Microsoft, its backer.</p>



<p class="">&#8220;A decision by any court would also hold some persuasive value for other similar cases around the world,&#8221; says Vibhav Mithal, a lawyer specialising in artificial intelligence at the Indian law firm Anand and Anand.</p>



<p class="">Mr Mithal said the verdict in the lawsuit filed by ANI could &#8220;define how these AI models will operate in the future&#8221; and &#8220;what copyrighted news content can be used to train AI generative models [like ChatGPT]&#8221;.</p>



<p class="">A court ruling in ANI&#8217;s favour could spark further legal cases and open the possibility of AI companies entering into licensing agreements with content creators, as some companies have already started doing.</p>



<p class="">&#8220;But a ruling in OpenAI&#8217;s favour will lead to more freedom to use copyrighted protected data to train AI models,&#8221; he said.</p>



<h2 class="wp-block-heading">What is ANI&#8217;s case?</h2>



<p class="">ANI provides news to its paying subscribers and owns exclusive copyright over a large archive of text, images and videos.</p>



<p class="">In its suit filed in the Delhi High Court, ANI says that OpenAI used its content to train ChatGPT without permission, arguing that this improved the chatbot and profited OpenAI.</p>



<p class="">The news agency said that before filing the suit, it had told OpenAI its content was being used unlawfully and offered to grant the company a license to use its data.</p>



<p class="">ANI says OpenAI declined the offer and put the news agency on an internal blocklist so that its data would no longer be collected. OpenAI also asked ANI to disable certain web crawlers to ensure that its content was not picked up by ChatGPT.</p>



<p class="">The news agency says that despite these measures, ChatGPT picks up its content from websites of its subscribers. This has enriched OpenAI &#8220;unjustly&#8221;, it says.</p>



<p class="">ANI also says in its suit that the chatbot produces its content verbatim for certain prompts. In some instances, ANI says, ChatGPT has falsely attributed statements to the news agency, hampering its credibility and misleading the public.</p>



<p class="">Apart from seeking compensation for damages, ANI has asked the court to direct OpenAI to stop storing and using its work.</p>



<p class="">In its response, OpenAI says it opposes the case being filed in India since the company and its servers are not located in the country and the chatbot has also not been trained there.</p>



<h2 class="wp-block-heading">News organisations seek to join lawsuit</h2>



<p class="">In December, the Federation of Indian Publishers, which claims to represent 80% of Indian publishers including the Indian offices of Penguin Random House and Oxford University Press, filed an application in court saying that they were &#8220;directly affected&#8221; by this case and should be allowed to present their arguments as well.</p>



<p class="">A month later, the Digital News Publishers Association (DNPA), which represents leading Indian news outlets, and three other media outlets filed a similar application. They argued that while OpenAI had entered into licensing agreements with international news publishers such as the Associated Press and the Financial Times, a similar model had not been followed in India.</p>



<p class="">DNPA told the court the case would affect the livelihood of journalists and the country&#8217;s entire news industry. OpenAI has, however, argued that chatbots are not a &#8220;substitute&#8221; for news subscriptions and are not used for such purposes.</p>



<p class="">The court has not admitted these applications by the publishers yet and OpenAI has argued that the court should not hear them.</p>



<p class="">But the judge clarified that even if these associations are allowed to argue, the court will restrict itself to ANI&#8217;s claims, since the other parties have not filed their own lawsuits.</p>



<p class="">Meanwhile, OpenAI told the BBC it is engaging in &#8220;constructive partnerships and conversations&#8221; with news organisations around the world, including India, to &#8220;work collaboratively&#8221;.</p>



<h2 class="wp-block-heading">Where AI regulation in India stands</h2>



<p class="">Analysts say the lawsuits filed against ChatGPT across the world could bring into focus aspects of chatbots that have escaped scrutiny so far.</p>



<p class="">Dr Sivaramakrishnan R Guruvayur, whose research focuses on responsible use of artificial intelligence, says that the data used to train chatbots is one such aspect.</p>



<p class="">The ANI-OpenAI case will lead the court &#8220;to evaluate the data sources&#8221; of chatbots, he said.</p>



<p class="">Governments across the world have been grappling with how to regulate AI. In 2023, Italy blocked ChatGPT saying that the chatbot&#8217;s mass collection and storage of personal data raised privacy concerns.</p>



<p class="">The European Union approved a law to regulate AI last year.</p>



<p class="">The Indian government too has&nbsp;<a target="_blank" href="https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf" rel="noreferrer noopener">indicated</a>&nbsp;plans to regulate AI. Before the 2024 election, the government issued an advisory that AI tools that were &#8220;under-testing&#8221; or &#8220;unreliable&#8221; should get government permission before launching.</p>



<p class="">It also said AI tools should not generate responses that are illegal in India or &#8220;threaten the integrity of the electoral process&#8221;.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">22993</post-id>	</item>
		<item>
		<title>USA: OpenAI says Chinese rivals using its work for their AI apps</title>
		<link>https://news.mazzaltov.com/usa-openai-says-chinese-rivals-using-its-work-for-their-ai-apps/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=usa-openai-says-chinese-rivals-using-its-work-for-their-ai-apps</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Thu, 30 Jan 2025 19:00:00 +0000</pubDate>
				<category><![CDATA[USA News]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[DeepSeek]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[USA]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=22397</guid>

					<description><![CDATA[The maker of ChatGPT, OpenAI, has complained that rivals, including those in China, are using its work to make rapid advances in developing their own artificial intelligence (AI) tools. The&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">The maker of ChatGPT, OpenAI, has complained that rivals, including those in China, are using its work to make rapid advances in developing their own artificial intelligence (AI) tools.</p>



<p class="">The status of OpenAI &#8211; and other US firms &#8211; as the world leaders in AI&nbsp;<a href="https://www.bbc.co.uk/news/articles/c0qw7z2v1pgo">has been dramatically undermined this week</a>&nbsp;by the sudden emergence of DeepSeek, a Chinese app that can emulate the performance of ChatGPT, apparently at a fraction of the cost.</p>



<p class="">Bloomberg has reported that Microsoft is investigating whether data belonging to OpenAI &#8211; in which it is a major investor &#8211; has been used in an unauthorised way.</p>



<p class="">The BBC has contacted Microsoft and DeepSeek for comment.</p>



<p class="">OpenAI&#8217;s concerns have been echoed by the recently appointed White House &#8220;AI and crypto czar&#8221;, David Sacks.</p>



<p class="">Speaking on Fox News, he suggested that DeepSeek may have used the models developed by OpenAI to get better, a process called knowledge distillation.</p>



<p class="">&#8220;There&#8217;s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI&#8217;s models,&#8221; Mr Sacks said.</p>



<p class="">&#8220;I think one of the things you&#8217;re going to see over the next few months is our leading AI companies taking steps to try and prevent distillation&#8230; That would definitely slow down some of these copycat models.&#8221;</p>



<p class="">In a statement, OpenAI said Chinese and other companies were &#8220;constantly trying to distil the models of leading US AI companies&#8221;.</p>



<p class="">&#8220;As we go forward&#8230;it is critically important that we are working closely with the U.S. government to best protect the most capable models,&#8221; it added.</p>



<p class="">The accusation of disrespecting intellectual property rights is, however, far from new in tech &#8211; and has frequently been levelled at major US AI firms themselves.</p>



<p class="">US officials are also considering the national security implications of DeepSeek&#8217;s emergence, according to White House press secretary Karoline Leavitt.</p>



<p class="">&#8220;I spoke with [the National Security Council] this morning, they are looking into what [the national security implications] may be,&#8221; said Ms Leavitt, who also restated US President Donald Trump&#8217;s remarks a day earlier that DeepSeek should be a wake-up call for the US tech industry.</p>



<p class="">The announcement comes after the US Navy reportedly banned its members from using DeepSeek&#8217;s apps. According to CNBC, it sent an email warning staff not to use the app due to &#8220;potential security and ethical concerns associated with the model&#8217;s origin and usage&#8221;.</p>



<p class="">The Navy did not immediately respond to a request for comment from BBC News.</p>



<p class="">Data safety experts have warned users to be careful with the tool, given it collects large amounts of personal data and stores it on servers in China.</p>



<p class="">Meanwhile, DeepSeek says it has been the target of cyber attacks. On Monday it said it would temporarily limit registrations because of &#8220;large-scale malicious attacks&#8221; on its software.</p>



<p class="">A banner showing on the company&#8217;s website says registration may be busy as a result of the attacks.</p>



<p class="">Yuyuan Tantian, a social media channel under China&#8217;s state broadcaster CCTV, claims the firm has faced &#8220;several&#8221; cyber attacks in recent weeks, which have increased in &#8220;intensity&#8221;.</p>



<p class="">America&#8217;s AI industry has been shaken by DeepSeek&#8217;s apparent breakthrough, especially because of the prevailing view that the US was far ahead in the race.</p>



<p class="">A slew of trade restrictions banning China&#8217;s access to high-end chips was believed to have cemented this.</p>



<p class="">Although China has boosted investment in advanced tech to diversify its economy, DeepSeek is not one of the big Chinese firms that have been developing AI models to rival US-made ChatGPT.</p>



<p class="">Experts say the US still has an advantage &#8211; it is home to some of the biggest chip companies &#8211; and that it&#8217;s unclear yet exactly how DeepSeek built its model and how far it can go.</p>



]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">22397</post-id>	</item>
		<item>
		<title>USA: OpenAI boss Sam Altman denies sister&#8217;s allegations of childhood rape</title>
		<link>https://news.mazzaltov.com/usa-openai-boss-sam-altman-denies-sisters-allegations-of-childhood-rape/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=usa-openai-boss-sam-altman-denies-sisters-allegations-of-childhood-rape</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Wed, 08 Jan 2025 21:00:00 +0000</pubDate>
				<category><![CDATA[Mazzaltov News]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[Rape]]></category>
		<category><![CDATA[Sam Altman]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=20822</guid>

					<description><![CDATA[OpenAI chief executive Sam Altman&#8217;s sister, Ann Altman, has filed a lawsuit alleging that he regularly sexually abused her between 1997 and 2006. The lawsuit, which was filed on 6&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">OpenAI chief executive Sam Altman&#8217;s sister, Ann Altman, has filed a lawsuit alleging that he regularly sexually abused her between 1997 and 2006.</p>



<p class="">The lawsuit, which was filed on 6 January in a US District Court in the Eastern District of Missouri, alleges that the abuse started when she was three and Mr Altman was 12.</p>



<p class="">In a joint statement on X, with his mother and two brothers, Mr Altman denied the allegations, saying &#8220;all of these claims are utterly untrue.&#8221;</p>



<p class="">&#8220;Caring for a family member who faces mental health challenges is incredibly difficult,&#8221; the statement added.</p>



<p class="">&#8220;This situation causes immense pain to our entire family.&#8221;In the filing, which has been seen by the BBC, Ms Altman alleged that the abuse, which took place over many years, included rape.</p>



<p class="">The lawsuit added the last instance of the alleged abuse took place when Mr Altman was an adult but she was still a minor.</p>



<p class="">The lawsuit requested a jury trial and damages in excess of $75,000 (£60,100).</p>



<p class="">Ms Altman has previously made similar allegations against her brother on social media platforms such as X.</p>



<p class="">Mr Altman is one of the technology world&#8217;s most high-profile figures.</p>



<p class="">In late 2022, OpenAI launched the ChatGPT generative artificial intelligence (AI) chatbot.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">20822</post-id>	</item>
		<item>
		<title>USA: OpenAI whistleblower Suchir Balaji found dead in San Francisco apartment</title>
		<link>https://news.mazzaltov.com/usa-openai-whistleblower-suchir-balaji-found-dead-in-san-francisco-apartment/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=usa-openai-whistleblower-suchir-balaji-found-dead-in-san-francisco-apartment</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Mon, 16 Dec 2024 01:00:00 +0000</pubDate>
				<category><![CDATA[Business News]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[USA]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=18656</guid>

					<description><![CDATA[An OpenAI researcher-turned-whistleblower has been found dead in an apartment in San Francisco, authorities said. The body of Suchir Balaji, 26, was discovered on 26 November after police said they&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">An OpenAI researcher-turned-whistleblower has been found dead in an apartment in San Francisco, authorities said.</p>



<p class="">The body of Suchir Balaji, 26, was discovered on 26 November after police said they received a call asking officers to check on his wellbeing.</p>



<p class="">The San Francisco medical examiner&#8217;s office determined his death to be suicide and police found no evidence of foul play.</p>



<p class="">In recent months, Mr Balaji had publicly spoken out against the practices of artificial intelligence company OpenAI, which has been fighting a number of lawsuits relating to its data-gathering.</p>



<p class="">In October, the New York Times published an interview with Mr Balaji in which he alleged that OpenAI had violated US copyright law while developing its popular ChatGPT online chatbot.</p>



<p class="">The article said that after working at the company for four years as a researcher, Mr Balaji had come to the conclusion that &#8220;OpenAI&#8217;s use of copyrighted data to build ChatGPT violated the law and that technologies like ChatGPT were damaging the internet&#8221;.</p>



<p class="">OpenAI says its models are &#8220;trained on publicly available data&#8221;.</p>



<p class="">Mr Balaji left the company in August, telling the New York Times he had since been working on personal projects.</p>



<p class="">He grew up in Cupertino, California, before going to study computer science at the University of California, Berkeley.</p>



<p class="">A spokesperson for OpenAI said in a statement cited by CNBC News that it was &#8220;devastated to learn of this incredibly sad news today and our hearts go out to Suchir&#8217;s loved ones during this difficult time&#8221;.</p>



<p class="">US and Canadian news publishers, including the New York Times, and a group of best-selling writers, including John Grisham, have filed lawsuits claiming the company was illegally using news articles to train its software.</p>



<p class="">OpenAI told the BBC in November its software is &#8220;grounded in fair use and related international copyright principles that are fair for creators and support innovation&#8221;.</p>



]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">18656</post-id>	</item>
		<item>
		<title>Canada: Major news outlets sue OpenAI</title>
		<link>https://news.mazzaltov.com/canada-major-news-outlets-sue-openai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=canada-major-news-outlets-sue-openai</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Sun, 01 Dec 2024 01:00:00 +0000</pubDate>
				<category><![CDATA[Tech News]]></category>
		<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Canada]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=17172</guid>

					<description><![CDATA[A coalition of Canada&#8217;s biggest news outlets is suing OpenAI, the maker of artificial intelligence chatbot ChatGPT, alleging the company is illegally using news articles to train its software. News&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">A coalition of Canada&#8217;s biggest news outlets is suing OpenAI, the maker of artificial intelligence chatbot ChatGPT, alleging the company is illegally using news articles to train its software.</p>



<p class="">News organisations including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC have all joined the suit, reportedly the first of its kind in the country.</p>



<p class="">&#8220;Journalism is in the public interest. OpenAI using other companies&#8217; journalism for their own commercial gain is not. It&#8217;s illegal,&#8221; the media organisations said in a joint statement.</p>



<p class="">OpenAI says its models are &#8220;trained on publicly available data&#8221;.</p>



<p class="">The software is &#8220;grounded in fair use and related international copyright principles that are fair for creators and support innovation&#8221;, the company said.</p>



<p class="">&#8220;We collaborate closely with news publishers, including in the display, attribution and links to their content in ChatGPT search, and offer them easy ways to opt out should they so desire.&#8221;</p>



<p class="">In its 84-page filing, the Canadian media coalition accuses OpenAI of ignoring safeguards like paywalls or copyright disclaimers meant to prevent the unauthorised copying of content.</p>



<p class="">&#8220;OpenAI regularly breaches copyright and online terms of use by scraping large swaths of content from Canadian media to help develop its products, such as ChatGPT,” the companies said.</p>



<p class="">The group, which includes the publishers of Canada&#8217;s top newspapers, is seeking punitive damages of C$20,000 ($14,300; £11,000) per article they allege was used to illegally train ChatGPT &#8211; a sum that could add up to billions of dollars in compensation.</p>



<p class="">The news organisations are also requesting an order that would force the company to share profits made from using their articles, as well as an injunction prohibiting OpenAI from using them in future.</p>



<p class="">While the lawsuit against OpenAI is a first for Canadian publishers, it follows a similar action in the US launched by the New York Times and other publishers last year. In April, lawyers for the Times accused OpenAI of erasing evidence they needed for trial.</p>



<p class="">In another case, the Authors Guild and a group of major writers including John Grisham also claimed copyright infringement.</p>



<p class="">Earlier this week, the Wall Street Journal reported that OpenAI was valued at C$219bn after its latest round of fundraising from investors.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17172</post-id>	</item>
	</channel>
</rss>
