<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Publishing on Big Muddy</title><link>https://muddy.jprs.me/tags/publishing/</link><description>Recent content in Publishing on Big Muddy</description><generator>Hugo</generator><language>en-US</language><lastBuildDate>Fri, 10 Apr 2026 18:27:00 -0400</lastBuildDate><atom:link href="https://muddy.jprs.me/tags/publishing/index.xml" rel="self" type="application/rss+xml"/><item><title>Scientists invent a fake disease, AI picks it up, other scientists cite it</title><link>https://muddy.jprs.me/links/2026-04-10-scientists-invent-a-fake-disease-ai-picks-it-up-other-scientists-cite-it/</link><pubDate>Fri, 10 Apr 2026 18:27:00 -0400</pubDate><guid>https://muddy.jprs.me/links/2026-04-10-scientists-invent-a-fake-disease-ai-picks-it-up-other-scientists-cite-it/</guid><description>&lt;p&gt;A somewhat disturbing bit of reporting from &lt;em&gt;Nature&lt;/em&gt; tells the story of bixonimania, a fake eye disease invented by Swedish medical researcher Almira Osmanovic Thunström and her team. She seeded the idea for the fake disease in a series of ridiculous, joke-filled blog posts and preprints in mid-2024.&lt;/p&gt;
&lt;p&gt;Because AI can be overly credulous with its sourcing (how often do Google&amp;rsquo;s AI answers confidently cite random Reddit posts for the bulk of an answer?), the disease got picked up as an &amp;ldquo;emerging term&amp;rdquo; by the leading chatbots. The preprints even got cited a handful of times in real publications, which is further evidence that scientists don&amp;rsquo;t read the papers they cite (I guess the modern equivalent of copying citations from other papers is having AI dredge the literature for you).&lt;/p&gt;
&lt;p&gt;I can see AI agents being exploited by those pushing dubious medical diagnoses to flood the Internet and preprint servers with articles aimed at convincing LLMs of the validity of their positions. That is, if the agents aren&amp;rsquo;t too busy &lt;a href="https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/"&gt;spinning up websites to defame&lt;/a&gt; those who incur their wrath.&lt;/p&gt;</description></item><item><title>AI makes it easier to generate fake papers, too</title><link>https://muddy.jprs.me/links/2026-04-08-ai-makes-it-easier-to-generate-fake-papers-too/</link><pubDate>Wed, 08 Apr 2026 20:09:00 -0400</pubDate><guid>https://muddy.jprs.me/links/2026-04-08-ai-makes-it-easier-to-generate-fake-papers-too/</guid><description>&lt;p&gt;Here&amp;rsquo;s a fun project from Tyler Vigen, creator of the famous &lt;a href="https://tylervigen.com/spurious-correlations"&gt;Spurious Correlations&lt;/a&gt; page (which has been cited as a cautionary tale in many a science class). Using his database of real but spurious correlations (created by calculating the Pearson correlation coefficient &lt;em&gt;r&lt;/em&gt; between a very large number of variables and picking out the hits), he used AI to create amusing fake manuscripts expounding on these statistical flukes as if they were real research questions.&lt;/p&gt;
&lt;p&gt;These papers were generated in January 2024, and as &lt;a href="https://muddy.jprs.me/links/2026-02-12-an-end-to-end-ai-pipeline-for-policy-evaluation-papers/"&gt;previously discussed&lt;/a&gt; on this blog, the pipeline for end-to-end paper generation has come a long way in two years. I have no doubt Tyler could make these papers sound much more convincing using today&amp;rsquo;s models, though of course his goal here is to make you laugh (and think), not to trick you. But there will surely be many scholars adopting this data dredging strategy to generate &amp;ldquo;real&amp;rdquo; papers, contributing to a deluge of papers &lt;a href="https://muddy.jprs.me/links/2026-03-03-the-productivity-shock-coming-to-academic-publishing/"&gt;flooding the academic publishing system&lt;/a&gt;.&lt;/p&gt;</description></item><item><title>Andrew Gelman's blog schedule</title><link>https://muddy.jprs.me/notes/2026-04-02-andrew-gelman-s-blog-schedule/</link><pubDate>Thu, 02 Apr 2026 16:24:00 -0400</pubDate><guid>https://muddy.jprs.me/notes/2026-04-02-andrew-gelman-s-blog-schedule/</guid><description>&lt;p&gt;Andrew Gelman, professor of statistics at Columbia University, runs one of my &lt;a href="https://statmodeling.stat.columbia.edu/"&gt;favourite blogs&lt;/a&gt; on the Internet. He has been writing there for over 21 years, since &lt;a href="https://statmodeling.stat.columbia.edu/2004/10/12/a_weblog_for_re/"&gt;October 2004&lt;/a&gt;. Many of his collaborators also contribute to the blog, but he is the primary author. In a &lt;a href="https://statmodeling.stat.columbia.edu/2024/09/17/20-years-of-blogging-what-are-your-favorite-posts/"&gt;2024 post&lt;/a&gt; celebrating 20 years of blogging, Gelman mentions having over 12,000 posts. This is a cadence of over 1.6 posts/day sustained for two decades!&lt;/p&gt;
&lt;p&gt;One of the more unusual things about Gelman&amp;rsquo;s blog is that most posts are not particularly topical. Sure, many posts are time-sensitive, announcing upcoming events or commenting on recent publications (like &lt;a href="https://statmodeling.stat.columbia.edu/2022/01/07/pnas-gigo-qrp-wtf-approaching-the-platonic-ideal-of-junk-science/"&gt;doing damage control&lt;/a&gt; on deeply flawed papers likely to receive attention). But there is generally one non-topical post each day. A line in a &lt;a href="https://statmodeling.stat.columbia.edu/2026/04/01/this-evil-lottery-scam-appears-to-be-aided-and-abetted-by-google-apple-yahoo-morningstar-msn-etc-etc/"&gt;recent post&lt;/a&gt; caught my eye:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;As regular readers know, our posts are usually on a 6-month lag, but this one is so important I had to share it with you right away.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;As a regular reader myself, I was aware of the delayed posting schedule, but out of curiosity, I wanted to see how far back this habit went. Here&amp;rsquo;s the rough timeline I came up with:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;In &lt;a href="https://statmodeling.stat.columbia.edu/2011/11/13/at-last-treated-with-the-disrespect-that-i-deserve/"&gt;2011&lt;/a&gt;, Gelman wrote that his &amp;ldquo;non-topical blog entries are on approximately one-month delay&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://statmodeling.stat.columbia.edu/2012/04/09/in-the-future-everyone-will-publishing-everything/"&gt;2012&lt;/a&gt;, he referred to &amp;ldquo;stacking up posts here with a roughly one-month delay&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://statmodeling.stat.columbia.edu/2014/06/09/hate-polynomials/"&gt;2014&lt;/a&gt;, he said that &amp;ldquo;most of the posts here are on a 1 or 2 month delay.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://statmodeling.stat.columbia.edu/2016/03/23/in-defense-of-endless-arguments/"&gt;2016&lt;/a&gt;, he casually mentioned &amp;ldquo;our 2-month delay&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;Later that year (August 2016), in a post literally titled &amp;ldquo;&lt;a href="https://statmodeling.stat.columbia.edu/2016/08/02/inbox-zero-and-a-change-of-pace/"&gt;My next 170 blog posts&lt;/a&gt;&amp;rdquo;, he said he had filled &amp;ldquo;the blog through mid-January&amp;rdquo; and had &amp;ldquo;170 blog posts in the queue.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;By &lt;a href="https://statmodeling.stat.columbia.edu/2018/04/21/blogging-different-writing/"&gt;2018&lt;/a&gt;, he mentioned the blog was &amp;ldquo;mostly on a six-month delay&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://statmodeling.stat.columbia.edu/2019/01/05/dissolving-fermi-paradox/"&gt;2019&lt;/a&gt;, he referred to &amp;ldquo;our 6-month blog delay.&amp;rdquo;&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://statmodeling.stat.columbia.edu/2022/01/07/pnas-gigo-qrp-wtf-approaching-the-platonic-ideal-of-junk-science/"&gt;2022&lt;/a&gt;, he wrote: &amp;ldquo;Usually I schedule these with a 6-month lag, but this time I&amp;rsquo;m posting right away&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;In &lt;a href="https://statmodeling.stat.columbia.edu/2026/02/24/tufte-on-graphs-as-comparisons/"&gt;February 2026&lt;/a&gt;, he said the &amp;ldquo;current end of the blog queue is in early July&amp;rdquo;.&lt;/li&gt;
&lt;li&gt;Then, in &lt;a href="https://statmodeling.stat.columbia.edu/2026/04/01/this-evil-lottery-scam-appears-to-be-aided-and-abetted-by-google-apple-yahoo-morningstar-msn-etc-etc/"&gt;April 2026&lt;/a&gt;, came the latest &amp;ldquo;usually on a 6-month lag&amp;rdquo; remark.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;It seems the blog had about one month of content in the publishing pipeline by 2011, ramped up to one to two months by 2014, two months by early 2016, and finally jumped to six months by August 2016, where it has been ever since. Quite the arsenal of scheduled content!&lt;/p&gt;</description></item><item><title>Some insight into writing a book using Quarto</title><link>https://muddy.jprs.me/links/2026-03-16-some-insight-into-writing-a-book-using-quarto/</link><pubDate>Mon, 16 Mar 2026 20:48:00 -0400</pubDate><guid>https://muddy.jprs.me/links/2026-03-16-some-insight-into-writing-a-book-using-quarto/</guid><description>&lt;p&gt;Prof. Kieran Healy (Sociology, Duke University) shares some nice insight into the process of writing a book in Quarto using R in this post. The output screenshots he shares look beautiful, and the idea of deploying the same content as a clean PDF &lt;em&gt;and&lt;/em&gt; a responsive website is awesome. A full draft of the book, &lt;em&gt;Data Visualization: A Practical Introduction (Second Edition)&lt;/em&gt;, is available as a website &lt;a href="https://socviz.co/"&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;I have grown increasingly tired of writing in any format other than a plain text file I can easily version control and move around, so the idea of writing a book in Quarto is appealing to me (as long as it has enough technical content to justify the format).&lt;/p&gt;</description></item><item><title>What will the paper of the future look like?</title><link>https://muddy.jprs.me/links/2026-03-10-what-will-the-paper-of-the-future-look-like/</link><pubDate>Tue, 10 Mar 2026 23:48:00 -0400</pubDate><guid>https://muddy.jprs.me/links/2026-03-10-what-will-the-paper-of-the-future-look-like/</guid><description>&lt;p&gt;I am sharing today a short blog post by the Institute for Replication: &amp;ldquo;What will the paper of the future look like?&amp;rdquo;&lt;/p&gt;
&lt;p&gt;In short: research looking more like &lt;a href="https://www.youtube.com/watch?v=zwRdO9_GGhY"&gt;software development&lt;/a&gt; (as presaged by Prof. Richard McElreath, author of the excellent &lt;em&gt;Statistical Rethinking&lt;/em&gt;), with the ability to reuse common material, formalize results, and remix analyses built into the pipeline.&lt;/p&gt;</description></item><item><title>Editors hate this one weird trick</title><link>https://muddy.jprs.me/notes/2026-03-05-editors-hate-this-one-weird-trick/</link><pubDate>Thu, 05 Mar 2026 20:05:00 -0500</pubDate><guid>https://muddy.jprs.me/notes/2026-03-05-editors-hate-this-one-weird-trick/</guid><description>&lt;p&gt;Given my &lt;a href="https://muddy.jprs.me/links/2026-03-03-the-productivity-shock-coming-to-academic-publishing/"&gt;recent&lt;/a&gt; &lt;a href="https://muddy.jprs.me/notes/2026-02-26-these-academic-journal-ai-policies-aren-t-going-to-last/"&gt;posts&lt;/a&gt; on AI in academic publishing, I just wanted to share this joke from Prof. Arthur Spirling on &lt;a href="https://x.com/arthur_spirling/status/2029006543765520471"&gt;Twitter&lt;/a&gt;:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Actually you cant run my paper through Claude to desk reject it because Claude is a regular coauthor of mine. Conflict of interest. Checkmate, editors&lt;/p&gt;
&lt;/blockquote&gt;</description></item><item><title>The productivity shock coming to academic publishing</title><link>https://muddy.jprs.me/links/2026-03-03-the-productivity-shock-coming-to-academic-publishing/</link><pubDate>Tue, 03 Mar 2026 19:33:00 -0500</pubDate><guid>https://muddy.jprs.me/links/2026-03-03-the-productivity-shock-coming-to-academic-publishing/</guid><description>&lt;p&gt;Today, I wanted to share this piece from economist Scott Cunningham (Baylor University), who wrote about how AI is widening the gap between research and publishing. Or, in economics terms (emphasis mine):&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;But what happens when the same &lt;strong&gt;productivity shock&lt;/strong&gt; hits a system where the bottleneck was never really production in the first place, but rather was a hierarchical journal structure that depended immensely on editor time, skill, discretion and voluntary workers with the same talents called referees for screening quality deemed sufficient for publication?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The post mentions the Autonomous Policy Evaluation project—the end-to-end AI paper pipeline I &lt;a href="https://muddy.jprs.me/links/2026-02-12-an-end-to-end-ai-pipeline-for-policy-evaluation-papers/"&gt;wrote about a few weeks ago&lt;/a&gt;—and discusses the likely consequences of this flood of AI-generated papers. Assuming the number of publication slots in reputable journals is relatively fixed, AI-generated papers should add a very large amount of mass to the left side of the paper quality distribution. Acceptance rates will plummet and journals may rely on other signals of quality (name recognition, pedigree, institution) to thin the herd before actually reviewing content. As always, the rich get richer!&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;But this is imperfect, not to mention unfair, and so desk rejection gets noisier: good papers get killed by tired editors and marginally lower quality papers slip through to referees. It’s a cascading failure: volume breaks editors, broken editing wastes referees, wasted referees slow science.&lt;/p&gt;
&lt;/blockquote&gt;</description></item><item><title>These academic journal AI policies aren't going to last</title><link>https://muddy.jprs.me/notes/2026-02-26-these-academic-journal-ai-policies-aren-t-going-to-last/</link><pubDate>Thu, 26 Feb 2026 16:51:00 -0500</pubDate><guid>https://muddy.jprs.me/notes/2026-02-26-these-academic-journal-ai-policies-aren-t-going-to-last/</guid><description>&lt;p&gt;I recently came across the following policy on the &lt;a href="https://spectrumjournal.ca/index.php/spectrum/about/submissions"&gt;submission page&lt;/a&gt; of an academic journal:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Use of Artificial Intelligence (AI) tools&lt;/strong&gt;: One of the goals of &lt;em&gt;Spectrum&lt;/em&gt; is to stimulate critical thinking and skill development among authors and reviewers alike. &lt;em&gt;Spectrum&lt;/em&gt; discourages the submission of content generated by artificial intelligence (AI)-assisted technologies (such as chatGPT and similar tools). This includes tools that generate text, data, images, figures, or other materials, as well as tools that are used to summarize and synthesize sources. Authors should be aware that such tools are vulnerable to factual inaccuracies, biases, and logical fallacies, and may pose risks to privacy, confidentiality, and copyright.&lt;/p&gt;
&lt;p&gt;If authors choose to submit work created with the assistance of AI tools, such use &lt;strong&gt;must be disclosed&lt;/strong&gt; and described in the submission. The disclosure must include: 1) what system was used, 2) who used it, 3) the time/date of the use, 4) the prompt(s) used to generate the content, and 5) the content in the submission that resulted from use of AI tools. The output from the AI system should also be submitted as supplementary material. Authors must accept full responsibility for the accuracy and integrity of the submission. AI systems do not meet the criteria for authorship, and should not be listed as a co-author.&lt;/p&gt;
&lt;/blockquote&gt;</description></item><item><title>More on vibe researching</title><link>https://muddy.jprs.me/links/2026-02-13-more-on-vibe-researching/</link><pubDate>Fri, 13 Feb 2026 23:49:00 -0500</pubDate><guid>https://muddy.jprs.me/links/2026-02-13-more-on-vibe-researching/</guid><description>&lt;p&gt;To follow on &lt;a href="https://muddy.jprs.me/links/2026-02-12-an-end-to-end-ai-pipeline-for-policy-evaluation-papers/"&gt;yesterday&amp;rsquo;s post&lt;/a&gt; on AI-produced research, here is a reflection on &amp;ldquo;vibe researching&amp;rdquo; from Prof. Joshua Gans of the University of Toronto&amp;rsquo;s Rotman School of Management. Since the release of the first &amp;ldquo;reasoning&amp;rdquo; models in late 2024, he has gone all in on experimenting with AI-first research.&lt;/p&gt;
&lt;p&gt;One of the key takeaways is that he found himself pursuing low quality ideas to completion more often, precisely because the cost of choosing to continue to pursue a questionable idea has been lowered. Sycophancy is a problem, too. With an AI cheerleader, it is easy to convince yourself you have a result when you do not.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Those ideas were all fine but not high quality, and what is worse, I didn’t realise that they weren’t that significant until external referees said so. I didn’t realise it because they were reasonably hard to do, and I was happy to have solved them.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I will note that (human) peer reviewers cannot be the levee that stops the flood of middling AI research: the system of uncompensated labour that undergirds all of academic publishing is already strained to bursting, as every editor desperate to find referees for a paper will tell you.&lt;/p&gt;
&lt;p&gt;Prof. Gans concludes his year-long experiment in &amp;ldquo;vibe researching&amp;rdquo; was a failure, despite producing many working papers and publishing a handful of them:&lt;/p&gt;</description></item><item><title>An end-to-end AI pipeline for policy evaluation papers</title><link>https://muddy.jprs.me/links/2026-02-12-an-end-to-end-ai-pipeline-for-policy-evaluation-papers/</link><pubDate>Thu, 12 Feb 2026 19:11:00 -0500</pubDate><guid>https://muddy.jprs.me/links/2026-02-12-an-end-to-end-ai-pipeline-for-policy-evaluation-papers/</guid><description>&lt;p&gt;Prof. David Yanagizawa-Drott from the Social Catalyst Lab at the University of Zurich has launched Project APE (Autonomous Policy Evaluation), an end-to-end AI pipeline to generate policy evaluation papers. The vast majority of policies around the world are never rigorously evaluated, so it would certainly be useful if we were able to do so in an automated fashion.&lt;/p&gt;
&lt;p&gt;Claude Code is the heart of the project, but other models are used to review the outputs and provide journal-style referee reports. All the coding is done in R (though Python is called in some scripts). Currently, Gemini 3 Flash acts as the judge, comparing the generated papers against published research in top economics journals:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;Blind comparison: An LLM judge compares two papers without knowing which is AI-generated&lt;/li&gt;
&lt;li&gt;Position swapping: Each pair is judged twice with paper order swapped to control for bias&lt;/li&gt;
&lt;li&gt;TrueSkill ratings: Papers accumulate skill ratings that update after each match&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;
&lt;p&gt;The project&amp;rsquo;s home page lists the AI&amp;rsquo;s current &amp;ldquo;win rate&amp;rdquo; at 3.5% in head-to-head matchups against human-written papers.&lt;/p&gt;
&lt;p&gt;Prof. Yanagizawa-Drott says &amp;ldquo;Currently it requires at a minimum some initial human input for each paper,&amp;rdquo; although he does not specify exactly what. If we look at the &lt;a href="https://github.com/SocialCatalystLab/ape-papers/blob/main/apep_0264/v1/initialization.md"&gt;&lt;code&gt;initialization.md&lt;/code&gt;&lt;/a&gt; file that can be found in each paper&amp;rsquo;s directory, we see the following questions with user-provided inputs:&lt;/p&gt;
&lt;blockquote&gt;
&lt;ol&gt;
&lt;li&gt;Policy domain: What policy area interests you?&lt;/li&gt;
&lt;li&gt;Method: Which identification method?&lt;/li&gt;
&lt;li&gt;Data era: Modern or historical data?&lt;/li&gt;
&lt;li&gt;API keys: Did you configure data API keys?&lt;/li&gt;
&lt;li&gt;External review: Include external model reviews?&lt;/li&gt;
&lt;li&gt;Risk appetite: Exploration vs exploitation?&lt;/li&gt;
&lt;li&gt;Other preferences: Any other preferences or constraints?&lt;/li&gt;
&lt;/ol&gt;
&lt;/blockquote&gt;
&lt;p&gt;The code, reviews, manuscript, and even the results of the initial idea generation process are all available on &lt;a href="https://github.com/SocialCatalystLab/ape-papers"&gt;GitHub&lt;/a&gt;. Their immediate goal is to generate a sample of 1,000 papers and run human evaluations on them (at time of posting, there are 264 papers in the GitHub repository).&lt;/p&gt;</description></item><item><title>Why a Canadian news site just launched an AI publishing tool</title><link>https://muddy.jprs.me/links/2026-02-09-why-a-canadian-news-site-just-launched-an-ai-publishing-tool/</link><pubDate>Mon, 09 Feb 2026 19:49:00 -0500</pubDate><guid>https://muddy.jprs.me/links/2026-02-09-why-a-canadian-news-site-just-launched-an-ai-publishing-tool/</guid><description>&lt;p&gt;It&amp;rsquo;s no secret that Canadian journalism (like journalism everywhere) is in trouble. Newsrooms face a steady stream of layoffs despite a couple hundred million Canadian dollars of direct and indirect &lt;a href="https://macdonaldlaurier.ca/government-subsidies-for-canadas-media-were-supposed-to-be-temporary-but-they-keep-on-growing-and-could-be-here-to-stay-dave-snow-in-the-hub/"&gt;government subsidies&lt;/a&gt; every year. The vast majority of outlets eligible for these subsidies take advantage of them, and combined they can &lt;a href="https://macdonaldlaurier.ca/government-subsidies-for-canadas-media-were-supposed-to-be-temporary-but-they-keep-on-growing-and-could-be-here-to-stay-dave-snow-in-the-hub/"&gt;subsidize half of a journalist&amp;rsquo;s salary&lt;/a&gt;. News organizations are desperate to diversify their revenue streams.&lt;/p&gt;
&lt;p&gt;&lt;a href="thehub.ca/2025/03/28/rudyard-griffiths-and-sean-speer-the-hub-is-receiving-over-60000-from-the-government-and-donating-it-all-to-charity-will-the-rest-of-canadas-subsidized-media-disclose-what-theyre-gettin/"&gt;&lt;em&gt;The Hub&lt;/em&gt;&lt;/a&gt; is a right-leaning publication launched in 2021 with a focus on policy and politics. Notably, the outlet &lt;a href="https://macdonaldlaurier.ca/the-ottawa-declaration-on-canadian-journalism/"&gt;declines&lt;/a&gt; or &lt;a href="https://thehub.ca/2025/03/28/rudyard-griffiths-and-sean-speer-the-hub-is-receiving-over-60000-from-the-government-and-donating-it-all-to-charity-will-the-rest-of-canadas-subsidized-media-disclose-what-theyre-gettin/"&gt;donates&lt;/a&gt; their subsidies, citing a valid concern that the scale of such subsidies &lt;a href="https://thehub.ca/2024/07/08/deepdive-government-funding-of-the-news-industry-is-eroding-canadians-trust-in-the-media/"&gt;threaten the perceived trustworthiness and independence&lt;/a&gt; of the media.&lt;/p&gt;
&lt;p&gt;In late January 2026, &lt;em&gt;The Hub&lt;/em&gt; &lt;a href="https://thehub.ca/2026/01/28/why-we-are-launching-newsbox-for-the-hubs-paid-subscribers/"&gt;launched NewsBox&lt;/a&gt;, an AI-powered publishing tool. NewsBox aims to make it easier for creators to transform their content (written, audio, or video) into other formats, such as speeches, essays, or talking points, while maintaining the author&amp;rsquo;s distinct voice. You can see examples of the tool&amp;rsquo;s output on new articles in &lt;em&gt;The Hub&lt;/em&gt;, each of which is accompanied by an AI-generated summary and list of quotes at the top of the page. There is also a &amp;ldquo;Hub AI&amp;rdquo; chatbot in the sidebar of every article.&lt;/p&gt;
&lt;p&gt;The app very much uses &lt;em&gt;The Hub&lt;/em&gt;&amp;rsquo;s branding, prominently featuring the outlet’s co-creators, who also created NewsBox. While their pitch talks about preserving creators&amp;rsquo; voices to avoid the &amp;ldquo;soulless prose&amp;rdquo; and &amp;ldquo;slop&amp;rdquo; outputted by ChatGPT and similar tools, I have to wonder if tighter integration of AI into the news and opinion side of the operation will &lt;a href="https://reutersinstitute.politics.ox.ac.uk/generative-ai-and-news-report-2025-how-people-think-about-ais-role-journalism-and-society"&gt;raise its own issues with trust&lt;/a&gt;. &lt;em&gt;The Hub&lt;/em&gt; has always been fairly tech-friendly, including a &lt;a href="https://thehub.ca/2023/12/20/marc-edge-canadas-news-media-need-a-plan-and-some-help-to-find-a-way-forward/"&gt;longstanding&lt;/a&gt; &lt;a href="https://thehub.ca/category/meta/"&gt;sponsorship&lt;/a&gt; by Meta.&lt;/p&gt;</description></item></channel></rss>