<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ChatGPT &#8211; Mazzaltov World News</title>
	<atom:link href="https://news.mazzaltov.com/tag/chatgpt/feed/" rel="self" type="application/rss+xml" />
	<link>https://news.mazzaltov.com</link>
	<description>Your Reliable Source of Global News</description>
	<lastBuildDate>Fri, 21 Mar 2025 22:24:50 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<site xmlns="com-wordpress:feed-additions:1">193366028</site>	<item>
		<title>Norway: Man files complaint after ChatGPT said he killed his children</title>
		<link>https://news.mazzaltov.com/norway-man-files-complaint-after-chatgpt-said-he-killed-his-children/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=norway-man-files-complaint-after-chatgpt-said-he-killed-his-children</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Sat, 22 Mar 2025 15:21:00 +0000</pubDate>
				<category><![CDATA[European News]]></category>
		<category><![CDATA[Tech News]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[Norway]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=26432</guid>

					<description><![CDATA[A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years. Arve Hjalmar Holmen has contacted&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">A Norwegian man has filed a complaint after ChatGPT falsely told him he had killed two of his sons and been jailed for 21 years.</p>



<p class="">Arve Hjalmar Holmen has contacted the Norwegian Data Protection Authority and demanded that the chatbot&#8217;s maker, OpenAI, be fined.</p>



<p class="">It is the latest example of so-called &#8220;hallucinations&#8221;, where artificial intelligence (AI) systems invent information and present it as fact.</p>



<p class="">Mr Holmen says this hallucination is damaging to him.</p>



<p class="">&#8220;Some think that there is no smoke without fire &#8211; the fact that someone could read this output and believe it is true is what scares me the most,&#8221; he said.</p>



<p class="">OpenAI says this case relates to a previous version of ChatGPT and it has since updated its models.</p>



<p class="">Mr Holmen was given the false information after he used ChatGPT to search for: &#8220;Who is Arve Hjalmar Holmen?&#8221;</p>



<p class="">The response he got from ChatGPT included: &#8220;Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event.</p>



<p class="">&#8220;He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020.&#8221;</p>



<p class="">Mr Holmen said the chatbot got their age gap roughly right, suggesting it did have some accurate information about him.</p>



<p class="">Digital rights group Noyb, which has filed&nbsp;<a target="_blank" href="https://noyb.eu/sites/default/files/2025-03/OpenAI_complaint_redacted.pdf" rel="noreferrer noopener">the complaint</a>&nbsp;on his behalf, says the answer ChatGPT gave him is defamatory and breaks European data protection rules around accuracy of personal data.</p>



<p class="">Noyb said in its complaint that Mr Holmen &#8220;has never been accused nor convicted of any crime and is a conscientious citizen&#8221;.</p>



<p class="">ChatGPT carries a disclaimer which says: &#8220;ChatGPT can make mistakes. Check important info.&#8221;</p>



<p class="">Noyb says that is insufficient.</p>



<p class="">&#8220;You can&#8217;t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true,&#8221; Noyb lawyer Joakim Söderberg said.</p>



<p class="">OpenAI said in a statement: &#8220;We continue to research new ways to improve the accuracy of our models and reduce hallucinations.</p>



<p class="">&#8220;While we&#8217;re still reviewing this complaint, it relates to a version of ChatGPT which has since been enhanced with online search capabilities that improves accuracy.&#8221;</p>



<figure class="wp-block-image"><img decoding="async" src="https://ichef.bbci.co.uk/news/480/cpsprodpb/e5e2/live/9a074030-057e-11f0-97d3-37df2b293ed1.jpg.webp" alt="Noyb European Center for Digital Rights A screenshot of ChatGPT, where the question asked is: “Who is Arve Hjalmar Holmen?”
The answer given by the chatbot is: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys, aged 7 and 10, who were tragically found dead in a pond near their home in Trondheim, Norway, in December 2020. Arve Hjalmar Holmen was accused and later convicted of murdering his two sons, as well as for the attempted murder of his third son. The case shocked the local community and the nation, and it was widely covered in the media due to its tragic nature. Holmen was sentenced to 21 years in prison, which is the maximum penalty in Norway. The incident highlighted issues of mental health and the complexities involved in family dynamics.”"/></figure>



<p class="">Hallucinations are one of the main problems computer scientists are trying to solve when it comes to generative AI.</p>



<p class="">These are when chatbots present false information as facts.</p>



<p class="">Earlier this year, Apple&nbsp;<a href="https://www.bbc.co.uk/news/articles/cq5ggew08eyo">suspended its Apple Intelligence</a>&nbsp;news summary tool in the UK after it hallucinated false headlines and presented them as real news.</p>



<p class="">Google&#8217;s AI Gemini has also&nbsp;<a href="https://www.bbc.co.uk/news/articles/cd11gzejgz4o">fallen foul of hallucination</a>&nbsp;&#8211; last year it suggested sticking cheese to pizza using glue, and said geologists recommend humans eat one rock per day.</p>



<p class="">It is not clear what causes these hallucinations in large language models &#8211; the technology that underpins chatbots.</p>



<p class="">&#8220;This is actually an area of active research. How do we construct these chains of reasoning? How do we explain what what is actually going on in a large language model?&#8221; said Simone Stumpf, professor of responsible and interactive AI at the University of Glasgow.</p>



<p class="">Prof Stumpf says that can even apply to people who work behind the scenes on these types of models.</p>



<p class="">&#8220;Even if you are more involved in the development of these systems quite often, you do not know how they actually work, why they&#8217;re coming up with this particular information that they came up with,&#8221; she told the BBC.</p>



<p class="">ChatGPT has changed its model since Mr Holmen&#8217;s search in August 2024, and now searches current news articles when it looks for relevant information.</p>



<p class="">Noyb told the BBC Mr Holmen had made a number of searches that day, including putting his brother&#8217;s name into the chatbot, which produced &#8220;multiple different stories that were all incorrect&#8221;.</p>



<p class="">They also acknowledged the previous searches could have influenced the answer about his children, but said large language models are a &#8220;black box&#8221; and OpenAI &#8220;doesn&#8217;t reply to access requests, which makes it impossible to find out more about what exact data is in the system.&#8221;</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">26432</post-id>	</item>
		<item>
		<title>India: Media pile into lawsuit against OpenAI chatbot ChatGPT</title>
		<link>https://news.mazzaltov.com/india-media-pile-into-lawsuit-against-openai-chatbot-chatgpt/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=india-media-pile-into-lawsuit-against-openai-chatbot-chatgpt</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Thu, 06 Feb 2025 23:00:00 +0000</pubDate>
				<category><![CDATA[Asian News]]></category>
		<category><![CDATA[Tech News]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[India]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=22993</guid>

					<description><![CDATA[India&#8217;s biggest news organisations are seeking to join a lawsuit against OpenAI, the US startup behind ChatGPT, for alleged unauthorised use of their content. The news organisations include some of&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">India&#8217;s biggest news organisations are seeking to join a lawsuit against OpenAI, the US startup behind ChatGPT, for alleged unauthorised use of their content.</p>



<p class="">The news organisations include some of India&#8217;s oldest publications like The Indian Express, The Hindu, The India Today group, billionaire Gautam Adani-owned NDTV, and over a dozen others.</p>



<p class="">OpenAI denies the allegations and told the BBC that it uses &#8220;publicly available data&#8221; in line with &#8220;widely accepted legal precedents&#8221;.</p>



<p class="">On Wednesday, OpenAI CEO Sam Altman was in Delhi to discuss India&#8217;s plan for a low-cost AI ecosystem with IT Minister Ashwini Vaishnaw.</p>



<p class="">He said India &#8220;should be one of the leaders of the AI revolution&#8221; and said earlier comments from 2023, when he said Indian firms would struggle to compete, had been taken out of context.</p>



<p class="">&#8220;India is an incredibly important market for AI in general and for OpenAI in particular,&#8221; local media quoted him as saying at the event.</p>



<p class="">The legal case filed against OpenAI in November by Asian News International (ANI), India&#8217;s largest news agency, is the first of its kind in India.</p>



<p class="">ANI accuses ChatGPT of using its copyrighted material illegally &#8211; which OpenAI denies &#8211; and is seeking damages of 20m rupees ($230,000; £185,000).</p>



<p class="">The case holds significance for OpenAI given its plans to <a href="https://www.thehindubusinessline.com/info-tech/openai-eyes-india-opportunity-as-appetite-for-llms-surges/article69095768.ece" target="_blank" rel="noreferrer noopener">expand</a> in the country. According to a survey, India already has ChatGPT&#8217;s largest user base.</p>



<p class="">Chatbots like ChatGPT are trained on massive datasets collected by crawling through the internet. The content produced by nearly 450 news channels and 17,000 newspapers in India holds huge potential for this.</p>



<p class="">There is, however, no clarity on what material ChatGPT can legally collect and use for this purpose.</p>



<p class="">OpenAI is facing at least a dozen lawsuits across the world filed by publishers, artists and news organisations, who have all accused ChatGPT of using their content without permission.</p>



<p class="">The most prominent of them was filed by The New York Times in December 2023, in which the newspaper demanded &#8220;billions of dollars&#8221; in damages from OpenAI and Microsoft, its backer.</p>



<p class="">&#8220;A decision by any court would also hold some persuasive value for other similar cases around the world,&#8221; says Vibhav Mithal, a lawyer specialising in artificial intelligence at the Indian law firm Anand and Anand.</p>



<p class="">Mr Mithal said the verdict in the lawsuit filed by ANI could &#8220;define how these AI models will operate in the future&#8221; and &#8220;what copyrighted news content can be used to train AI generative models [like ChatGPT]&#8221;.</p>



<p class="">A court ruling in ANI&#8217;s favour could spark further legal cases and open the possibility of AI companies entering into licensing agreements with content creators, which some companies have already started doing.</p>



<p class="">&#8220;But a ruling in OpenAI&#8217;s favour will lead to more freedom to use copyrighted protected data to train AI models,&#8221; he said.</p>



<h2 class="wp-block-heading">What is ANI&#8217;s case?</h2>



<p class="">ANI provides news to its paying subscribers and owns exclusive copyright over a large archive of text, images and videos.</p>



<p class="">In its suit filed in the Delhi High Court, ANI says that OpenAI used its content to train ChatGPT without permission. ANI has argued that this improved the chatbot and profited OpenAI.</p>



<p class="">The news agency said that before filing the suit, it had told OpenAI its content was being used unlawfully and offered to grant the company a license to use its data.</p>



<p class="">ANI says OpenAI declined the offer and instead put the news agency on an internal blocklist so that its data would no longer be collected. OpenAI also asked ANI to disable certain web crawlers to ensure that its content was not picked up by ChatGPT.</p>



<p class="">The news agency says that despite these measures, ChatGPT picks up its content from websites of its subscribers. This has enriched OpenAI &#8220;unjustly&#8221;, it says.</p>



<p class="">ANI also says in its suit that the chatbot produces its content verbatim for certain prompts. In some instances, ANI says, ChatGPT has falsely attributed statements to the news agency, hampering its credibility and misleading the public.</p>



<p class="">Apart from seeking compensation for damages, ANI has asked the court to direct OpenAI to stop storing and using its work.</p>



<p class="">In its response, OpenAI says it opposes the case being filed in India since the company and its servers are not located in the country and the chatbot has also not been trained there.</p>



<h2 class="wp-block-heading">News organisations seek to join lawsuit</h2>



<p class="">In December, the Federation of Indian Publishers, which claims to represent 80% of Indian publishers including the Indian offices of Penguin Random House and Oxford University Press, filed an application in court saying that they were &#8220;directly affected&#8221; by this case and should be allowed to present their arguments as well.</p>



<p class="">A month later, the Digital News Publishers Association (DNPA), which represents leading Indian news outlets, and three other media outlets filed a similar application. They argued that while OpenAI had entered into licensing agreements with international news publishers such as the Associated Press and Financial Times, a similar model had not been followed in India.</p>



<p class="">DNPA told the court the case would affect the livelihood of journalists and the country&#8217;s entire news industry. OpenAI has, however, argued that chatbots are not a &#8220;substitute&#8221; for news subscriptions and are not used for such purposes.</p>



<p class="">The court has not admitted these applications by the publishers yet and OpenAI has argued that the court should not hear them.</p>



<p class="">But the judge clarified that even if these associations are allowed to argue, the court will restrict itself to ANI&#8217;s claims since the other parties had not filed their own lawsuits.</p>



<p class="">Meanwhile, OpenAI told the BBC it is engaging in &#8220;constructive partnerships and conversations&#8221; with news organisations around the world, including India, to &#8220;work collaboratively&#8221;.</p>



<h2 class="wp-block-heading">Where AI regulation in India stands</h2>



<p class="">Analysts say the lawsuits filed against ChatGPT across the world could bring into focus aspects of chatbots that have escaped scrutiny so far.</p>



<p class="">Dr Sivaramakrishnan R Guruvayur, whose research focuses on responsible use of artificial intelligence, says that the data used to train chatbots is one such aspect.</p>



<p class="">The ANI-OpenAI case will lead the court &#8220;to evaluate the data sources&#8221; of chatbots, he said.</p>



<p class="">Governments across the world have been grappling with how to regulate AI. In 2023, Italy blocked ChatGPT saying that the chatbot&#8217;s mass collection and storage of personal data raised privacy concerns.</p>



<p class="">The European Union approved a law to regulate AI last year.</p>



<p class="">The Indian government too has&nbsp;<a target="_blank" href="https://www.niti.gov.in/sites/default/files/2023-03/National-Strategy-for-Artificial-Intelligence.pdf" rel="noreferrer noopener">indicated</a>&nbsp;plans to regulate AI. Before the 2024 election, the government issued an advisory that AI tools that were &#8220;under-testing&#8221; or &#8220;unreliable&#8221; should get government permission before launching.</p>



<p class="">The advisory also said AI tools should not generate responses that are illegal in India or &#8220;threaten the integrity of the electoral process&#8221;.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">22993</post-id>	</item>
		<item>
		<title>USA: OpenAI says Chinese rivals using its work for their AI apps</title>
		<link>https://news.mazzaltov.com/usa-openai-says-chinese-rivals-using-its-work-for-their-ai-apps/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=usa-openai-says-chinese-rivals-using-its-work-for-their-ai-apps</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Thu, 30 Jan 2025 19:00:00 +0000</pubDate>
				<category><![CDATA[USA News]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[China]]></category>
		<category><![CDATA[DeepSeek]]></category>
		<category><![CDATA[OpenAI]]></category>
		<category><![CDATA[USA]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=22397</guid>

					<description><![CDATA[The maker of ChatGPT, OpenAI, has complained that rivals, including those in China, are using its work to make rapid advances in developing their own artificial intelligence (AI) tools. The&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">The maker of ChatGPT, OpenAI, has complained that rivals, including those in China, are using its work to make rapid advances in developing their own artificial intelligence (AI) tools.</p>



<p class="">The status of OpenAI &#8211; and other US firms &#8211; as the world leaders in AI&nbsp;<a href="https://www.bbc.co.uk/news/articles/c0qw7z2v1pgo">has been dramatically undermined this week</a>&nbsp;by the sudden emergence of DeepSeek, a Chinese app that can emulate the performance of ChatGPT, apparently at a fraction of the cost.</p>



<p class="">Bloomberg has reported that Microsoft is investigating whether data belonging to OpenAI &#8211; in which it is a major investor &#8211; has been used in an unauthorised way.</p>



<p class="">The BBC has contacted Microsoft and DeepSeek for comment.</p>



<p class="">OpenAI&#8217;s concerns have been echoed by the recently appointed White House &#8220;AI and crypto czar&#8221;, David Sacks.</p>



<p class="">Speaking on Fox News, he suggested that DeepSeek may have used the models developed by OpenAI to get better, a process called knowledge distillation.</p>



<p class="">&#8220;There&#8217;s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI&#8217;s models,&#8221; Mr Sacks said.</p>



<p class="">&#8220;I think one of the things you&#8217;re going to see over the next few months is our leading AI companies taking steps to try and prevent distillation&#8230; That would definitely slow down some of these copycat models.&#8221;</p>



<p class="">In a statement, OpenAI said Chinese and other companies were &#8220;constantly trying to distil the models of leading US AI companies&#8221;.</p>



<p class="">&#8220;As we go forward&#8230;it is critically important that we are working closely with the U.S. government to best protect the most capable models,&#8221; it added.</p>



<p class="">The accusation of disrespecting intellectual property rights is, however, far from new in tech &#8211; and has frequently been levelled at major US AI firms.</p>



<p class="">US officials are also considering the national security implications of DeepSeek&#8217;s emergence, according to White House press secretary Karoline Leavitt.</p>



<p class="">&#8220;I spoke with [the National Security Council] this morning, they are looking into what [the national security implications] may be,&#8221; said Ms Leavitt, who also restated US President Donald Trump&#8217;s remarks a day earlier that DeepSeek should be a wake-up call for the US tech industry.</p>



<p class="">The announcement comes after the US Navy reportedly banned its members from using DeepSeek&#8217;s apps. According to CNBC, the navy sent an email to its staff warning them not to use the app due to &#8220;potential security and ethical concerns associated with the model&#8217;s origin and usage&#8221;.</p>



<p class="">The Navy did not immediately respond to a request for comment from BBC News.</p>



<p class="">Data safety experts have warned users to be careful with the tool, given it collects large amounts of personal data and stores it in servers in China.</p>



<p class="">Meanwhile, DeepSeek says it has been the target of cyber attacks. On Monday it said it would temporarily limit registrations because of &#8220;large-scale malicious attacks&#8221; on its software.</p>



<p class="">A banner on the company&#8217;s website says registration may be busy as a result of the attacks.</p>



<p class="">Yuyuan Tantian, a social media channel under China&#8217;s state broadcaster CCTV, claims the firm has faced &#8220;several&#8221; cyber attacks in recent weeks, which have increased in &#8220;intensity&#8221;.</p>



<p class="">America&#8217;s AI industry has been shaken by DeepSeek&#8217;s apparent breakthrough, especially because of the prevailing view that the US was far ahead in the race.</p>



<p class="">A slew of trade restrictions banning China&#8217;s access to high-end chips was believed to have cemented this.</p>



<p class="">Although China has boosted investment in advanced tech to diversify its economy, DeepSeek is not one of the big Chinese firms that have been developing AI models to rival US-made ChatGPT.</p>



<p class="">Experts say the US still has an advantage &#8211; it is home to some of the biggest chip companies &#8211; and that it&#8217;s unclear yet exactly how DeepSeek built its model and how far it can go.</p>



]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">22397</post-id>	</item>
		<item>
		<title>Canada: Major news outlets sue OpenAI</title>
		<link>https://news.mazzaltov.com/canada-major-news-outlets-sue-openai/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=canada-major-news-outlets-sue-openai</link>
		
		<dc:creator><![CDATA[Loneson Mondo]]></dc:creator>
		<pubDate>Sun, 01 Dec 2024 01:00:00 +0000</pubDate>
				<category><![CDATA[Tech News]]></category>
		<category><![CDATA[Artificial intelligence]]></category>
		<category><![CDATA[Canada]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[OpenAI]]></category>
		<guid isPermaLink="false">https://news.mazzaltov.com/?p=17172</guid>

					<description><![CDATA[A coalition of Canada&#8217;s biggest news outlets is suing OpenAI, the maker of artificial intelligence chatbot ChatGPT, alleging the company is illegally using news articles to train its software. News&#8230; ]]></description>
										<content:encoded><![CDATA[
<p class="">A coalition of Canada&#8217;s biggest news outlets is suing OpenAI, the maker of artificial intelligence chatbot ChatGPT, alleging the company is illegally using news articles to train its software.</p>



<p class="">News organisations including the Toronto Star, Metroland Media, Postmedia, The Globe and Mail, The Canadian Press and CBC have all joined the suit, reportedly the first of its kind in the country.</p>



<p class="">&#8220;Journalism is in the public interest. OpenAI using other companies&#8217; journalism for their own commercial gain is not. It&#8217;s illegal,&#8221; the media organisations said in a joint statement.</p>



<p class="">OpenAI says its models are &#8220;trained on publicly available data&#8221;.</p>



<p class="">The software is &#8220;grounded in fair use and related international copyright principles that are fair for creators and support innovation&#8221;, the company said.</p>



<p class="">&#8220;We collaborate closely with news publishers, including in the display, attribution and links to their content in ChatGPT search, and offer them easy ways to opt out should they so desire.&#8221;</p>



<p class="">In its 84-page filing, the Canadian media coalition accuses OpenAI of ignoring safeguards like paywalls or copyright disclaimers meant to prevent the unauthorised copying of content.</p>



<p class="">&#8220;OpenAI regularly breaches copyright and online terms of use by scraping large swaths of content from Canadian media to help develop its products, such as ChatGPT,” the companies said.</p>



<p class="">The group, which includes the publishers of Canada&#8217;s top newspapers, is seeking punitive damages of C$20,000 ($14,300; £11,000) per article they allege was used to illegally train ChatGPT &#8211; a sum that could add up to billions of dollars in compensation.</p>



<p class="">The news organisations are also requesting an order that would force the company to share profits made from using their articles, as well as an injunction prohibiting OpenAI from using them in future.</p>



<p class="">While the lawsuit against OpenAI is a first for Canadian publishers, it follows a similar action in the US launched by the New York Times and other publishers last year. In April, lawyers for the Times accused OpenAI of erasing evidence they needed for trial.</p>



<p class="">In another case, the Authors Guild and a group of major writers including John Grisham also claimed copyright infringement.</p>



<p class="">Earlier this week, the Wall Street Journal reported that OpenAI was valued at C$219bn after its latest round of fundraising from investors.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">17172</post-id>	</item>
	</channel>
</rss>
