Bookmarks 2025-05-21T01:36:52.077Z
by Owen Kibel
37 min read
29 New Bookmarks
Sec. Marco Rubio testifies on State Dept. budget request - YouTube
May 20, 2025
Sec. Marco Rubio testifies on State Dept. budget request
YouTube
Secretary of State Marco Rubio appears before the Senate Foreign Relations Committee to review the department's 2026 budget request. #foxnews #news #us #foxS...
The Golden Dome Missile Defense Shield - YouTube
May 20, 2025
The Golden Dome Missile Defense Shield
YouTube
President Trump announced the Golden Dome missile defense shield to protect the homeland from advanced missile threats. Included in the One, Big, Beautiful Bi...
Kekius Maximus on X: "lmao https://t.co/NLewn340J0" / X
May 20, 2025
musenet · GitHub Topics
May 20, 2025
Build software better, together
GitHub
GitHub is where people build software. More than 150 million people use GitHub to discover, fork, and contribute to over 420 million projects.
1. Chorus: Herr, unser Herrscher - YouTube Music
May 20, 2025
1. Chorus: Herr, unser Herrscher - YouTube Music
YouTube Music
Provided to YouTube by Naxos Digital Services 1. Chorus: Herr, unser Herrscher · Karl-Friedrich Beringer · Windsbacher Knabenchor · Münchener Kammerorcheste...
Trump Warns Republican Budget-Bill Holdouts of Being 'Knocked Out' of Party - WSJ
May 20, 2025
RTX 5080 Super rumored with 24GB of memory – Same 10,752 CUDA cores as the vanilla variant with a 400W+ TGP | Tom's Hardware
May 20, 2025
RTX 5080 Super rumored with 24GB of memory – Same 10,752 CUDA cores as the vanilla variant with a 400W+ TGP
Tom's Hardware
50% more VRAM than the standard RTX 5080.
Rare-Earths Plants Are Popping Up Outside China - WSJ
May 20, 2025
Rust turns 10: How a broken elevator changed software forever | ZDNET
May 20, 2025
Rust turns 10: How a broken elevator changed software forever
ZDNET
Rust 1.0 shipped in May 2015. Here's how it came about and why it marked a turning point in the world of software development.
How Peter Thiel's Relationship With Eliezer Yudkowsky Launched the AI Revolution | WIRED
May 20, 2025
How Peter Thiel's Relationship With Eliezer Yudkowsky Launched the AI Revolution
WIRED
The AI doomer and the AI boomer both created each other's monsters. An excerpt from "The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future."
Each year, Altman would point Thiel toward the most promising startup at Y Combinator – Airbnb in 2012, Stripe in 2013, Zenefits in 2014 – and Thiel would swallow hard and invest, even though he sometimes felt like he was being swept up in a hype cycle. Following Altman's advice brought Thiel's Founders Fund some immense returns. Thiel, meanwhile, became the loudest voice critiquing the lack of true technological progress amidst all the hype. "Forget flying cars," he quipped during a 2012 Stanford lecture. "We're still sitting in traffic." By the time Altman took over Y Combinator in 2014, he had internalized Thiel's critique of "tech stagnation" and channeled it to remake YC as an investor in "hard tech" moonshots like nuclear energy, supersonic planes – and artificial intelligence. Now it was Altman who was increasingly taking his cues from Thiel.

And if it's hard to exaggerate Thiel's effect on Altman, it's similarly easy to understate the influence that an AI-obsessed autodidact named Eliezer Yudkowsky had on Thiel's early investments in AI. Though he has since become perhaps the world's foremost AI doomsday prophet, Yudkowsky started out as a magnetic, techno-optimistic wunderkind who excelled at rallying investors, researchers, and eccentrics around a quest to "accelerate the singularity." In this excerpt from the forthcoming book The Optimist, Keach Hagey describes how Thiel's relationship with Yudkowsky set the stage for the generative AI revolution: How it was Yudkowsky who first inspired one of the founders of DeepMind to imagine and build a "superintelligence," and Yudkowsky who introduced the founders of DeepMind to Thiel, one of their first investors. How Thiel's conversations with Altman about DeepMind would help inspire the creation of OpenAI. And how Thiel, as one of Yudkowsky's most important backers, inadvertently seeded the AI-apocalyptic subcultures that would ultimately play a role in Sam Altman's ouster, years later, as CEO of OpenAI.

Among the many other people influenced by Vinge's fiction was Eliezer Yudkowsky. Born into an Orthodox Jewish family in 1979 in Chicago, Yudkowsky was the son of a psychiatrist mother and a physicist father who went on to work at Bell Labs and Intel on speech recognition, and was himself a devoted sci-fi fan. Yudkowsky began reading science fiction at age 7 and writing it at age 9. At 11, he scored a 1410 on the SAT. By seventh grade, he told his parents he could no longer tolerate school. He did not attend high school. By the time he was 17, he was painfully aware that he was not like other people, posting a web page declaring that he was a "genius" but "not a Nazi." He rejected being defined as a "male teenager," instead preferring to classify himself as an "Algernon," a reference to the famous Daniel Keyes short story about a lab mouse who gains enhanced intelligence. Thanks to Vinge, he had discovered the meaning of life. "The sole purpose of this page, the sole purpose of this site, the sole purpose of anything I ever do as an Algernon is to accelerate the Singularity," he wrote.

Around this time, Yudkowsky discovered an obscure mailing list of a society calling itself the Extropians, which was the subject of a 1994 article in Wired that happened to include their email address at the end. Founded by philosopher Max More in the 1980s, Extropianism is a form of pro-science super-optimism that seeks to fight entropy – the universal law that says things fall apart, everything tends toward chaos and death – on all fronts.
In practical terms, this meant signing up to have their bodies – or at least heads – frozen at negative 321 degrees Fahrenheit at the Alcor Life Extension Foundation in Scottsdale, Arizona, after they died. They would be revived once humanity was technologically advanced enough to do so. More philosophically, fighting entropy meant abiding by five principles: Boundless Expansion, Self-Transformation, Dynamic Optimism, Intelligent Technology, and Spontaneous Order. (Dynamic Optimism, for example, involved a technique called selective focus, in which you'd concentrate on only the positive aspects of a given situation.)

Robin Hanson, who joined the movement and became renowned for creating prediction markets, described attending multilevel Extropian parties at big houses in Palo Alto at the time. "And I was energized by them, because they were talking about all these interesting ideas. And my wife was put off because they were not very well presented, and a little weird," he said. "We all thought of ourselves as people who were seeing where the future was going to be, and other people didn't get it. Eventually – eventually – we'd be right, but who knows exactly when."

More's cofounder of the journal Extropy, Tom Bell, aka T. O. Morrow (Bell claims that Morrow is a distinct persona and not simply a pen name), wrote about systems of "polycentric law" that could arise organically from voluntary transactions between agents free of government interference, and of "Free Oceana," a potential Extropian settlement on a man-made floating island in international waters. (Bell ended up doing pro bono work years later for the Seasteading Institute, for which Thiel provided seed funding.) If this all sounds more than a bit libertarian, that's because it was. The WIRED article opens at one such Extropian gathering, during which an attendee shows up dressed like the "State," wearing a vinyl bustier, miniskirt, and chain harness top and carrying a riding crop, dragging another attendee dressed up as "the Taxpayer" on a leash on all fours.

The mailing list and broader Extropian community had only a few hundred members, but among them were a number of famous names, including Hanson; Marvin Minsky, the Turing Award-winning scientist who founded MIT's AI lab in the late 1950s; Ray Kurzweil, the computer scientist and futurist whose books would turn "the singularity" into a household word; Nick Bostrom, the Swedish philosopher whose writing would do the same for the supposed "existential risk" posed by AI; Julian Assange, a decade before he founded WikiLeaks; and three people – Nick Szabo, Wei Dai, and Hal Finney – rumored to either be or be adjacent to the pseudonymous creator of Bitcoin, Satoshi Nakamoto.

"It is clear from even a casual perusal of the Extropians archive (maintained by Wei Dai) that within a few months, teenage Eliezer Yudkowsky became one of this extraordinary cacophony's preeminent voices," wrote the journalist Jon Evans in his history of the movement. In 1996, at age 17, Yudkowsky argued that superintelligences would be a great improvement over humans, and could be here by 2020. Two members of the Extropian community, internet entrepreneurs Brian and Sabine Atkins – who met on an Extropian mailing list in 1998 and were married soon after – were so taken by this message that in 2000 they bankrolled a think tank for Yudkowsky, the Singularity Institute for Artificial Intelligence.
At 21, Yudkowsky moved to Atlanta and began drawing a nonprofit salary of around $20,000 a year to preach his message of benevolent superintelligence. "I thought very smart things would automatically be good," he said. Within eight months, however, he began to realize that he was wrong – way wrong. AI, he decided, could be a catastrophe. The Atkinses were understanding, and the institute's mission pivoted from making artificial intelligence to making friendly artificial intelligence. "The part where we needed to solve the friendly AI problem did put an obstacle in the path of charging right out to hire AI researchers, but also we just surely didn't have the funding to do that," Yudkowsky said.

Instead, he devised a new intellectual framework he dubbed "rationalism." (While on its face, rationalism is the belief that humankind has the power to use reason to come to correct answers, over time it came to describe a movement that, in the words of writer Ozy Brennan, includes "reductionism, materialism, moral non-realism, utilitarianism, anti-deathism and transhumanism." Scott Alexander, Yudkowsky's intellectual heir, jokes that the movement's true distinguishing trait is the belief that "Eliezer Yudkowsky is the rightful caliph.")

In a 2004 paper, "Coherent Extrapolated Volition," Yudkowsky argued that friendly AI should be developed based not just on what we think we want AI to do now, but what would actually be in our best interests. "The engineering goal is to ask what humankind 'wants,' or rather what we would decide if we knew more, thought faster, were more the people we wished we were, had grown up farther together, etc.," he wrote. In the paper, he also used a memorable metaphor, originated by Bostrom, for how AI could go wrong: If your AI is programmed to produce paper clips, if you're not careful, it might end up filling the solar system with paper clips.

In 2005, Yudkowsky attended a private dinner at a San Francisco restaurant held by the Foresight Institute, a technology think tank founded in the 1980s to push forward nanotechnology. (Many of its original members came from the L5 Society, which was dedicated to pressing for the creation of a space colony hovering just behind the moon, and successfully lobbied to keep the United States from signing the United Nations Moon Agreement of 1979 due to its provision against terraforming celestial bodies.) Thiel was in attendance, regaling fellow guests about a friend who was a market bellwether, because every time he thought some potential investment was hot, it would tank soon after. Yudkowsky, having no idea who Thiel was, walked up to him after dinner. "If your friend was a reliable signal about when an asset was going to go down, they would need to be doing some sort of cognition that beat the efficient market in order for them to reliably correlate with the stock going downwards," Yudkowsky said, essentially reminding Thiel about the efficient-market hypothesis, which posits that all risk factors are already priced into markets, leaving no room to make money from anything besides insider information. Thiel was charmed.

Thiel and Yudkowsky began having occasional dinners together. Yudkowsky came to regard Thiel "as something of a mentor figure," he said. In 2005, Thiel started funding Yudkowsky's Singularity Institute, and the following year they teamed up with Ray Kurzweil – whose book The Singularity Is Near had become a bestseller – to create the Singularity Summit at Stanford University.
Over the next six years, it expanded to become a prominent forum for futurists, transhumanists, Extropians, AI researchers, and science fiction authors, including Bostrom, More, Hanson, Stanford AI professor Sebastian Thrun, XPrize founder Peter Diamandis, and Aubrey de Grey, a gerontologist who claims humans can eventually defeat aging. Skype cofounder Jaan Tallinn, who participated in the summit, was inspired by Yudkowsky to become one of the primary funders of research dedicated to reducing existential risk from AI. Another summit participant, physicist Max Tegmark, would go on to co-found the Future of Life Institute. Vernor Vinge himself even showed up, looking like a public school chemistry teacher with his Walter White glasses and tidy gray beard, cheerfully reminding the audience that when the singularity comes, "We're no longer in the driver's seat."

In 2010, one of the AI researchers whom Yudkowsky invited to speak at the summit was Shane Legg, a New Zealand-born mathematician, computer scientist, and ballet dancer who had been obsessed with building superintelligence ever since Yudkowsky had introduced him to the idea a decade before. Legg had been working at Intelligenesis, a New York-based startup founded by the computer scientist Ben Goertzel that was trying to develop the world's first AI. Its best-known product was WebMind, an ambitious software project that attempted to predict stock market trends. Goertzel, who had a PhD in mathematics, had been an active poster on the Extropians mailing list for years, sparring affectionately with Yudkowsky on transhumanism and libertarianism. (He was in favor of the former but not so much the latter.) Back in 2000, Yudkowsky came to speak at Goertzel's company (which would go bankrupt within a year). Legg points to the talk as the moment when he started to take the idea of superintelligence seriously, going beyond the caricatures in the movies. Goertzel and Legg began referring to the concept as "artificial general intelligence."

Legg went on to get his own PhD, writing a dissertation, "Machine Super Intelligence," that noted the technology could become an existential threat, and then moved into a postdoctoral fellowship at University College London's Gatsby Computational Neuroscience Unit, a lab that encompassed neuroscience, machine learning, and AI. There, he met a gaming savant from London named Demis Hassabis, the son of a Singaporean mother and Greek Cypriot father. Hassabis had once been the second-ranked chess player in the world under the age of 14. Now he was focused on building an AI inspired by the human brain.

Legg and Hassabis shared a common, deeply unfashionable vision. "It was basically eye-rolling territory," Legg told the journalist Cade Metz. "If you talked to anybody about general AI, you would be considered at best eccentric, at worst some kind of delusional, nonscientific character." Legg thought it could be built in the academy, but Hassabis, who had already tried a startup and failed, knew better. The only way to do it was through industry. And there was one investor who would be an obvious place to start: Peter Thiel.

In the morning, they pitched Thiel, fresh from a workout, across his dining room table. Hassabis said they were building AGI inspired by the human brain, would initially measure its progress by training it to play games, and were confident that advances in computing power would drive their breakthroughs.
Thiel balked at first, but over the course of weeks agreed to invest $2.25 million, becoming the as-yet-unnamed company's first big investor. A few months later, Hassabis, Legg, and their friend, the entrepreneur Mustafa Suleyman, officially cofounded DeepMind, a reference to the company's plans to combine "deep learning," a type of machine learning that uses layers of neural networks, with actual neuroscience. From the beginning, they told investors that their goal was to develop AGI, even though they feared it could one day threaten humanity's very existence.

It was through Thiel's network that DeepMind recruited his fellow PayPal veteran Elon Musk as an investor. Thiel's Founders Fund, which had invested in Musk's rocket company, SpaceX, invited Hassabis to speak at a conference in 2012, and Musk was in attendance. Hassabis laid out his 10-year plan for DeepMind, touting it as a "Manhattan Project" for AI years before Altman would use the phrase. Thiel recalled one of his investors joking on the way out that the speech was impressive, but he felt the need to shoot Hassabis to save the human race.

The next year, Luke Nosek, a cofounder of both PayPal and Founders Fund who is friends with Musk and sits on the SpaceX board, introduced Hassabis to Musk. Musk took Hassabis on a tour of SpaceX's headquarters in Los Angeles. When the two settled down for lunch in the company cafeteria, they had a cosmic conversation. Hassabis told Musk he was working on the most important thing in the world, a superintelligent AI. Musk responded that he, in fact, was working on the most important thing in the world: turning humans into an interplanetary species by colonizing Mars. Hassabis responded that that sounded great, so long as a rogue AI did not follow Musk to Mars and destroy humanity there too. Musk got very quiet. He had never really thought about that. He decided to keep tabs on DeepMind's technology by investing in it.

In December 2013, Hassabis stood on stage at a machine-learning conference at Harrah's in Lake Tahoe and demonstrated DeepMind's first big breakthrough: an AI agent that could learn to play and then quickly master the classic Atari video game Breakout without any instruction from humans. DeepMind had done this with a combination of deep neural networks and reinforcement learning, and the results were so stunning that Google bought the company for a reported $650 million a month later.

The implications of DeepMind's achievement – which was a major step toward a general-purpose intelligence that could make sense of a chaotic world around it and work toward a goal – were not widely understood until the company published a paper on its findings in the journal Nature more than a year later. But Thiel, as a DeepMind investor, understood them well, and discussed them with Altman. In February 2014, a month after Google bought DeepMind, Altman wrote a post on his personal blog titled "AI" that declared the technology the most important tech trend that people were not paying enough attention to. "To be clear, AI (under the common scientific definition) likely won't work. You can say that about any new technology, and it's a generally correct statement. But I think most people are far too pessimistic about its chances," he wrote, adding that "artificial general intelligence might work, and if it does, it will be the biggest development in technology ever." This was a race that Yudkowsky had helped set off.
But as it picked up speed, Yudkowsky himself was growing increasingly alarmed about what he saw as the extinction-level danger it posed. He was still influential among investors, researchers, and eccentrics, but now as a voice of extreme caution. Yudkowsky was not personally involved in OpenAI, but his blog, LessWrong, was widely read among the AI researchers and engineers who worked there. (While still at Stripe, OpenAI cofounder Greg Brockman had organized a weekly LessWrong reading group.) The rationalist ideas Yudkowsky espoused overlapped significantly with those of the Effective Altruism movement, which was turning much of its attention to preventing existential risk from AI.

A few months after this race spilled into full public view with OpenAI's release of ChatGPT in November 2022, Yudkowsky published an essay in Time magazine arguing that unless the current wave of generative AI research was halted, "literally everyone on Earth will die." Thiel felt that Yudkowsky had become "extremely black-pilled and Luddite." And two of OpenAI's board members had ties to Effective Altruism. Less than a week before Altman was briefly ousted as CEO in the fall of 2023, Thiel warned his friend, "You don't understand how Eliezer has programmed half the people in your company to believe this stuff." Thiel's warning came with some guilt that he had created the many-headed monster that was now coming for his friend.

Excerpt adapted from The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future, by Keach Hagey. Published by arrangement with W. W. Norton & Company. Copyright © 2025 by Keach Hagey.
Google I/O 2025: Updates to Gemini 2.5 from Google DeepMind
May 20, 2025
Gemini 2.5: Our most intelligent models are getting even better
Google
At I/O 2025, we shared updates to our Gemini 2.5 model series and Deep Think, an experimental enhanced reasoning mode for 2.5 Pro.
As quantum mechanics turns 100, a new revolution is under way
May 20, 2025
As quantum mechanics turns 100, a new revolution is under way
Science News
With greater control over the quantum realm, physicists are poised to make major leaps in quantum computing, quantum gravity and more.
Google's AI Boss Says Gemini's New Abilities Point the Way to AGI | WIRED
May 20, 2025
Google's AI Boss Says Gemini's New Abilities Point the Way to AGI
WIRED
Google's AI models are learning to reason, wield agency, and build virtual models of the real world. The company's AI lead, Demis Hassabis, says all this – and more – will be needed for true AGI.
Google announced a slew of AI upgrades and new products at its annual I/O event today in Mountain View, California. The search giant revealed upgraded versions of Gemini Flash and Gemini Pro, Google's fastest and most capable models, respectively. Hassabis said that Gemini Pro outscores other models on LMArena, a widely used benchmark for measuring the abilities of AI models. Hassabis showed off some experimental AI offerings that reflect a vision for artificial intelligence that goes far beyond the chat window. "The way we've ended up working with today's chatbots is, I think, a transitory period," Hassabis told WIRED ahead of today's event. Hassabis says Gemini's nascent reasoning, agentic, and world-modeling capabilities could enable much more capable and proactive personal assistants, truly useful humanoid robots, and eventually AI that is as smart as any person.

At I/O, Google revealed Deep Think, a more advanced kind of simulated reasoning for the Pro model. The latest AI models can break down problems and deliberate over them in a way that more closely resembles human reasoning than the instinctive output of standard large language models. Deep Think uses more compute time and several undisclosed innovations to improve upon this trick, says Tulsee Doshi, product lead for the Gemini models.

Google today unveiled new products that rely on Gemini's ability to reason and take action. This includes Mariner, an agent for the Chrome browser that can go off and do chores like shopping when given a command. Mariner will be offered as a "research preview" through a new subscription plan called Google AI Ultra costing a hefty $249.99 per month. Google also showed off a more capable version of Google's experimental assistant Astra, which can see and hear the world through a smartphone or a pair of smart glasses. As well as converse about the world around it, Astra can now operate a smartphone when needed, for example using apps or searching the web to find useful information. Google showed a scene in which a user had Astra help look for parts needed for bike repairs. Doshi adds that Gemini is being trained to better understand how to preempt a user's needs, starting with firing off a web search when this might be useful. Future assistants will need to be proactive without being annoying, both Doshi and Hassabis say.

Astra's abilities depend on Gemini modeling the physical world to understand how it works, something Hassabis says is crucial to biological intelligence. AI will need to hone its reasoning, agency, and inventiveness, too, he says. "There are missing capabilities."

Well before AGI arrives, AI promises to upend the way people search the web, something that may affect Google's core business profoundly. The company announced new efforts to adapt search to the era of AI at I/O (see WIRED's I/O liveblog for everything announced today). Google will roll out an AI-powered version of search called AI Mode to everyone in the US and will introduce an AI-powered shopping tool that lets users upload a photo to see how an item of clothing would look on them. The company will also make AI Overviews, a service that summarizes results for Google users, available in more countries and languages.

Shifting Timelines

Some AI researchers and pundits argue that AGI may be just a few years away – or even here already depending on how you define the term. Hassabis says it may take five to 10 years for machines to master everything a human can do.
"That's still quite imminent in the grand scheme of things," Hassabis says. "But it's not tomorrow or next year." Hassabis says reasoning, agency, and world modeling should not only enable assistants like Astra but also give humanoid robots the brains they need to operate reliably in the messy real world. DeepMind is currently collaborating with Apptronik, one humanoid maker. A number of other companies, including big players like Tesla and startups such as Agility, Figure AI, and 1X are also building humanoids and touting their usefulness for factory and warehouse work. The ways these robots can be used is, however, very limited because they lack general intelligence. "What is missing from robotics is not so much the robot itself, but its understanding of its physical context," Hassabis says, adding that this is especially true for a home robot that would need to operate in complex and unfamiliar environments. In March, Google introduced Gemini Robotics, a version of its model capable of operating some robots.

Hassabis says that AI must become more inventive, too, if it is to imitate human intelligence faithfully. "Could [today's models] invent general relativity with the knowledge that Einstein had in 1900? Clearly not," he says. Google is currently exploring ways to coax greater inventiveness out of AI models. The company recently unveiled AlphaEvolve, a coding agent capable of coming up with new algorithms for longstanding problems. Hassabis says it may be possible to expand this creativity to areas beyond math and coding by having AI play games inside realistic 3D worlds. This would represent something of a return to DeepMind's roots, since the company made its name developing AI programs capable of playing video and board games. "You won't be surprised to learn that I'm keen on games again as a testing ground for that," Hassabis says.

Hassabis says AI may learn the same way that the board-game programs AlphaGo and AlphaZero learned to play chess and Go, although this will involve more ambitious world modelling. "You want a world model instead of a game model," he says. "We think that's critical for AGI to really understand the world."
President Trump Makes an Announcement with the Secretary of Defense - YouTube
May 20, 2025
President Trump Makes an Announcement with the Secretary of Defense
YouTube
The White House
Press Secretary Karoline Leavitt Holds a Press Briefing for Take Our Sons and Daughters to Work Day - YouTube
May 20, 2025
Press Secretary Karoline Leavitt Holds a Press Briefing for Take Our Sons and Daughters to Work Day
YouTube
The White House
From the Oval Office: President Trump Presents Medals of Sacrifice - YouTube
May 20, 2025
From the Oval Office: President Trump Presents Medals of Sacrifice
YouTube
President Trump posthumously awards the first-ever Medals of Sacrifice to three heroic law enforcement officers – 🇺🇸 Corporal Luis Paez Jr. 🇺🇸 Deputy Sherif...
Leftists Have NO STANDARDS - YouTube
May 20, 2025
Leftists Have NO STANDARDS
YouTube
SUPPORT THE SHOW BUY CAST BREW COFFEE NOW - https://castbrew.com/ Sign Up For Exclusive Episodes At https://timcast.com/ Merch - https://timcast.creator-spring...
Democrats FURIOUS, Try To BLOCK White Refugees - YouTube
May 20, 2025
Democrats FURIOUS, Try To BLOCK White Refugees
YouTube
BUY CAST BREW COFFEE TO SUPPORT THE SHOW - https://castbrew.com/ Become A Member And Protect Our Work at http://www.timcast.com Host: Libby Emmons @libbyemmons ...
| | "One of Us Didn't Miss the Biggest Story of the Century": Megyn Kelly and Jake Tapper Debate Biden - YouTube
May 20, 2025
"One of Us Didn't Miss the Biggest Story of the Century": Megyn Kelly and Jake Tapper Debate Biden
YouTube
"One of us didn't miss the biggest story of the century": Megyn Kelly and Jake Tapper debate Biden. LIKE & SUBSCRIBE for new videos everyday: https://bit.ly/... |
Kekius Maximus on X: "We are coming for those who organized the violence & death threats against Tesla. Remember this statement." / X
May 20, 2025
From Flocking Birds to Flickering Memories: Unveiling a Hidden Resonance?
May 20, 2025
From Flocking Birds to Flickering Memories: Unveiling a Hidden Resonance?
How the human brain is like a murmuration of starlings | Aeon Essays
May 20, 2025
How the human brain is like a murmuration of starlings | Aeon Essays
Aeon
The brain is much less like a machine than it is like the murmurations of a flock of starlings or an orchestral symphony
When thousands of starlings swoop and swirl in the evening sky, creating patterns called murmurations, no single bird is choreographing this aerial ballet. Each bird follows simple rules of interaction with its closest neighbours, yet out of these local interactions emerges a complex, coordinated dance that can respond swiftly to predators and environmental changes. This same principle of emergence – where sophisticated behaviours arise not from central control but from the interactions themselves – appears across nature and human society.

Consider how market prices emerge from countless individual trading decisions, none of which alone contains the "right" price. Each trader acts on partial information and personal strategies, yet their collective interaction produces a dynamic system that integrates information from across the globe. Human language evolves through a similar process of emergence. No individual or committee decides that "LOL" should enter common usage or that the meaning of "cool" should expand beyond temperature (even in French-speaking countries). Instead, these changes result from millions of daily linguistic interactions, with new patterns of speech bubbling up from the collective behaviour of speakers.

These examples highlight a key characteristic of highly interconnected systems: the rich interplay of constituent parts generates properties that defy reductive analysis. This principle of emergence, evident across seemingly unrelated fields, provides a powerful lens for examining one of our era's most elusive mysteries: how the brain works. The core idea of emergence inspired me to develop the concept I call the entangled brain: the need to understand the brain as an interactionally complex system where functions emerge from distributed, overlapping networks of regions rather than being localised to specific areas. Though the framework described here is still a minority view in neuroscience, we're witnessing a gradual paradigm transition (rather than a revolution), with increasing numbers of researchers acknowledging the limitations of more traditional ways of thinking.

Complexity science is an interdisciplinary field that studies systems composed of many interacting components whose collective behaviours give rise to collective properties – phenomena that cannot be fully explained by analysing individual parts in isolation. These systems, such as ecosystems, economies or – as we will see – the brain, are characterised by nonlinear dynamics, adaptability, self-organisation, and networked interactions that span multiple spatial and temporal scales.

Before exploring the ideas leading to the entangled brain framework, let's revisit some of the historical developments of the field of neuroscience to set the stage. In 1899, Cécile and Oskar Vogt, aged 24 and 29 respectively, arrived in Berlin to establish the Neurological Centre, initially a private institution for the anatomical study of the human brain that in 1902 was expanded to the Neurobiological Laboratory, and then the Kaiser Wilhelm Institute for Brain Research in 1914. Cécile Vogt was one of only two women in the entire institute. (In Prussia, until 1908, women were not granted access to regular university education, let alone the possibility to have a scientific career.) She obtained her doctoral degree from the University of Paris in 1900, while her husband Oskar obtained a doctorate for his thesis on the corpus callosum from the University of Jena in 1894.
In 1901, Korbinian Brodmann, who had concluded his doctorate in Leipzig in 1898, joined the group headed by the Vogts and was encouraged by them to undertake a systematic study of the cells of the cerebral cortex using tissue sections stained with a new cell-marking method. (The cortex is the outer brain surface with grooves and bulges; the subcortex comprises other cell masses that sit underneath.) The Vogts, and Brodmann working separately, were part of a first wave of anatomists trying to establish a complete map of the cerebral cortex, with the ultimate goal of understanding how brain structure and function are related. In a nutshell, where does a mental function such as an emotion reside in the brain?

Neurons – a key cell type of the nervous system – are diverse, and several cell classes can be determined based on both their shape and size. Researchers used these properties, as well as spatial differences in distribution and density, to define the boundaries between potential sectors. In this manner, Brodmann subdivided the cortex into approximately 50 regions (also called areas) per hemisphere. The Vogts, in contrast, thought that there might be more than 200 of them, each with its own distinguishing cytoarchitectonic pattern (that is, cell-related organisation).

Brodmann's map is the one that caught on and stuck, likely because neuroanatomists opposed too vigorous a subdivision of the cortex, and today students and researchers alike still refer to cortical parts by invoking his map. Although relatively little was known about the functions of cortical regions at the time, Brodmann believed that his partition identified "organs of the mind" – he was convinced that each cortical area subserved a particular function. Indeed, when he joined the Vogts' laboratory, they had encouraged him to try to understand the organisation of the cortex in light of their main thesis that different cytoarchitectonically defined areas are responsible for specific physiological responses and functions.

There is a deep logic that the Vogts and Brodmann were following. In fact, it is an idea that comes close to being an axiom in biology: function is tied to structure. In the case at hand, parts of the cortex that are structurally different (contain different cell types, cell arrangements, cell density, and so on) carry out different functions. In this manner, they believed they could understand how function is individuated from a detailed characterisation of the underlying microanatomy. They were in search of the functional units of the cortex – where the function could be sensory, motor, cognitive and so on. Unlike other organs of the body that have more clear-cut boundaries, the cortex's potential subdivisions are not readily apparent at a macroscopic level. One of the central goals of many neuroanatomists in the first half of the 20th century was to investigate such "organs of the mind" (an objective that persists to this day).

A corollary of this research programme was that individual brain regions – say, Brodmann's area 17 in the back of the brain – implemented specialised mechanisms, in this case related to processing visual sensory stimuli. Therefore, it was vital to understand the operation of individual parts since the area/region was the rightful mechanistic unit to understand how the nervous system works.
Neuroscientists' interest in brain regions was motivated by the notion that each region executes a particular function. For example, we could say that the function of the primary visual cortex is visual perception, or perhaps a more basic visual mechanism, such as detecting "edges" (sharp light-to-dark transitions) in images. The same type of description can be applied to other sensory and motor areas of the brain. This exercise becomes considerably less straightforward for brain areas that are much less sensory or motor, as their workings become exceedingly difficult to determine and describe. Nevertheless, in theory, we can imagine extending the idea to all parts of the brain. The result of this endeavour would be a list of area-function pairs: L = {(A1, F1), (A2, F2), …, (An, Fn)}, where areas A implement functions F.

There is, however, a serious problem with this endeavour. To date, no such list has been systematically generated. Indeed, current knowledge strongly suggests that this strategy will not yield a simple area-function list. What may start as a simple (A1, F1) pair is gradually revised as research progresses, and eventually grows to include a list of functions, such that area A1 participates in a series of functions F1, F2, …, Fk. From a basic one-to-one A1 → F1 mapping, the picture evolves to a one-to-many mapping: A1 → {F1, F2, …, Fk}. If the mapping between structure and function is not one-to-one, then what kind of system is the brain? This is the question the entangled brain concept sets out to tackle. It's useful to consider two types of information: anatomical and functional.

Let's start with the brain's massive combinatorial anatomical connectivity. Neurons are constantly exchanging electrochemical signals with one another. Signalling between them is aided by physical cell extensions, called axons, that protrude beyond the cell body for distances from less than 1 mm to around 15 cm in the central nervous system. Axons travelling longer distances typically bundle together along what are called white-matter tracts, to distinguish them from tissue composed of neuronal cell bodies, which is called grey matter. Anatomical connectivity, then, can be viewed as a system of roads and highways that supports cell signalling in the brain.

While most connections are local, the brain also maintains an impressive network of medium- and long-distance pathways. To give a rough sense of the dimensions involved, axonal lengths within local brain circuits (such as those within a single Brodmann area) have lengths from less than 1 mm to just under 1 cm. Connections between adjacent and nearby regions can extend between 0.5 to 4 cm, and connections between areas in different lobes, such as between the frontal and the occipital lobes, can reach 15 cm or more. Although details vary across mammalian species, there's evidence that the brains of macaque monkeys (a species that has a brain organisation resembling that of humans) are densely interconnected. For example, when scientists looked at any two regions in the cortex, they found that about 60 per cent of the time there's a direct connection between them (although the strength of the pathway decreases between regions that are farther apart). Notably, the cortex organises medium- and long-distance communication through special regions that act like major transportation hubs, routing and coordinating signals across the entire cortex, much like how major airports serve as central connection points in the global air transportation network.
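The area-function bookkeeping described above is easy to state concretely. Here is a minimal Python sketch of the list L and its inversion; the area and function labels are invented placeholders for illustration, not real anatomical assignments:

```python
# Toy version of the area-function list L = {(A1, F1), ...} discussed above.
# Labels are hypothetical placeholders, not real anatomical assignments.
from collections import defaultdict

area_to_functions = {
    "A1": {"F1_edge_detection", "F2_motion_onset", "F3_visual_imagery"},
    "A2": {"F2_motion_onset", "F4_threat_appraisal"},
    "A3": {"F3_visual_imagery", "F4_threat_appraisal", "F5_working_memory"},
}

# Invert the one-to-many map A -> {F, ...}: each function also recruits
# several areas, so neither direction of the mapping is one-to-one.
function_to_areas = defaultdict(set)
for area, funcs in area_to_functions.items():
    for f in funcs:
        function_to_areas[f].add(area)

for f, areas in sorted(function_to_areas.items()):
    print(f, "->", sorted(areas))   # e.g. F4_threat_appraisal -> ['A2', 'A3']
```

Once both directions are one-to-many, asking "which area does function F?" returns a set of regions, which is the essay's point: the natural unit is a distributed ensemble, not a single area.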
But that's just part of the story. Beyond the extensive interconnections found in the cortex, there are multiple "connectional systems" that weave together regions even further. The entire cortex connects to deeper brain structures. We can think of the brain as having distinct sectors. Simplifying somewhat, these are the cortex, the subcortical parts that are physically beneath the cortex in humans, and the brainstem. In the 1980s, it became clear that the cortex and subcortex are part of extensive connectional loops – from cortex to subcortex back to cortex. We now know that the multiple sectors are amply interlinked. What is more, a subcortical structure such as the thalamus, viewed in the past as a relatively passive steppingstone conveying signals to the cortex, is so sweepingly interconnected with the entire cortex that it is perhaps better to think in terms of a cortical-thalamic system. Even subcortical areas believed to mainly control basic functions, like the hypothalamus, which regulates hunger and body temperature among others, have widespread connections throughout the brain. This creates an incredibly intricate connectional web where signals can travel between disparate parts through multiple routes, hence the idea of "combinatorial" connectivity.

What are the implications of the connectional organisation of the brain? The dense nexus of pathways allows for remarkable flexibility in how the brain processes information and controls behaviour. Signals of all types can be exchanged and integrated in multiple ways. All this potential mixing strongly challenges how we traditionally think of the mind and brain in terms of simplistic labels such as "perception", "cognition", "emotion" and "action". I will return to this point later, but the standard view is further challenged by a second principle of brain organisation: highly distributed functional coordination.

The Roman Empire's roads, critical to its success, were extensive enough to circle the globe about twice over. In addition to obvious military applications, the road network supported trade, as well as cultural and administrative integration. These economic and cultural relationships and coordination between disparate parts of the empire were sustained by the incredible physical infrastructure known as the cursus publicus. Likewise, in the brain we need to move beyond the anatomical domain (the roads) to functional properties (such as economic and cultural relationships between different parts of the Roman Empire), all the more because neuroscientists themselves often focus too much on anatomical features.

In the brain, functional relationships between neuronal signals are detected across multiple spatial scales – from the local scale of neurons within a brain area to larger scales involving signals originating from the grey matter of different lobes (such as the frontal and parietal lobes, many centimetres apart). By signals, we mean the electrical activity of neurons that is directly recorded via microelectrodes inserted into grey matter (ie, neuronal tissue), measured indirectly when using functional magnetic resonance imaging (fMRI) in humans, or possibly via other measurement techniques. What kinds of functional relationships are detected? An important one is that signals from different sites exhibit synchronised neuronal activity.
This is notable because groups of neurons that fire in a coherent fashion indicate that they are functionally interrelated, and potentially part of a common process. Different types of signal coordination are believed to reflect processes such as attention and memory, among others. Additional types of relationships are detected mathematically, too, such as whether the strength of the response in one brain area is related to the temporal evolution of signals in a disparate location. In the brain, we identify signal relationships that are indicators of joint functions between regions, much like detecting cultural exchanges between separate parts of the Roman Empire via evidence of shared artefacts or language patterns.

When signals are measured from two sites within a local patch (say, a few millimetres across), it is not too surprising to find notable functional relationships between them (eg, their neuronal activity is correlated), as neurons likely receive similar inputs and are locally connected. Yet, we also observe functional relationships between neuronal signals from locations that are situated much farther apart and, critically, between brain parts that are not directly anatomically connected – there is no direct axonal connection between them. How does this happen? There is evidence that signal coordination between regions depends more on the total number of possible communication routes between them than on the existence of direct connections between points A and B. For example, although regions A and B are not anatomically connected, they both connect to region C, which thus serves as a bridge between them. Even more circuitous paths can unite A and B, much like flying between two cities that have no direct flights and require multiple layovers. In such a manner, the brain creates functional partnerships that take advantage of all possible ways through its intricate pathways. This helps explain how the brain can be so remarkably flexible, sustaining different partnerships between regions depending on what we're doing, thinking or feeling at any given moment.

When we consider the highways traversing the brain and how signals establish behaviourally relevant relationships across the central nervous system, we come to an important insight. In a highly interconnected system, to understand function, we need to shift away from thinking in terms of individual brain regions. The functional unit is not to be found at the level of the brain area, as commonly proposed. Instead, we need to consider neuronal ensembles distributed across multiple brain regions, much like the murmuration of starlings forms a single pattern from the collective behaviour of individual birds.

There are many instances of distributed neuronal ensembles. Groups of neurons extending over cortical (say, prefrontal cortex and hippocampus) and subcortical (say, amygdala) regions form circuits that are important for learning what is threatening and what is safe. Such multiregion circuits are ubiquitous; fMRI studies in humans have shown that the brain is organised in terms of large-scale networks that stretch across the cortex as well as subcortical territories. For example, the so-called "salience network" (suggested to be engaged when significant events are encountered) spans brain regions in the frontal and parietal lobes, among others, and can also be viewed as a neuronal ensemble.
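The bridge-region mechanism described above (regions A and B coupled only through C) can be demonstrated with synthetic signals. A minimal Python sketch; the coupling weights and noise levels are arbitrary assumptions, not measured values:

```python
# Sketch: two regions A and B with no direct connection still show correlated
# activity when both receive input from a bridge region C. Synthetic data;
# coupling weights (0.8) and noise scale (0.6) are assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
T = 5000                                   # number of time points
c = rng.normal(size=T)                     # activity of bridge region C
a = 0.8 * c + 0.6 * rng.normal(size=T)     # A = shared drive from C + private noise
b = 0.8 * c + 0.6 * rng.normal(size=T)     # B = shared drive from C + private noise

r_ab = np.corrcoef(a, b)[0, 1]
print(f"A-B functional correlation with no direct A-B pathway: {r_ab:.2f}")
```

With these weights the A-B correlation comes out near 0.64 even though the model contains no direct A-B term: the coupling is inherited entirely from the shared route through C, which is the essay's point about indirect communication paths.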
Whether we consider ensembles in the case of brain circuits or large-scale networks, the associated neuronal groupings should be viewed as strongly context dependent and dynamic. That is to say, they are not fixed entities but instead form dynamically to meet current situational requirements. Accordingly, they will dynamically assemble and disassemble as per behavioural needs. The implication of this view is that whereas brain regions A, B and C might generally be active together in dealing with a specific type of behaviour, in some contexts we will also observe an ensemble that encompasses region D, or instead the ensemble {A, C, D} that meets slightly different requirements. In all, neuronal ensembles constitute an extremely malleable functional unit.

Think of how an orchestra works during a complex piece of music. The string section might split into different groups, with some violins joining the woodwinds for one musical phrase while others harmonise with the cellos. Later, these groupings shift completely for a different passage. The brain works in a related way: rather than recruiting fixed regions, it forms flexible aggregations that assemble and disassemble based on what we're doing, thinking or feeling. This builds on what we learned about the brain's extensive physical connections and the coordinated activity across regions. These features make the formation of ensembles possible.

As is common in science, these ideas have a long genealogy. In 1949, the Canadian psychologist Donald Hebb proposed that the brain's ability to generate coherent thoughts derives from the spatiotemporal orchestration of neuronal activity. He hypothesised that a discrete, strongly interconnected group of active neurons called the cell assembly represents a distinct mental entity, such as a thought or an emotion. Yet, these ideas have taken a long time to mature, not least due to technical limitations in measuring signals simultaneously across the brain, and the relative insularity of experimental neuroscience from other disciplines, such as computer science, mathematics and physics.

Just as a symphony emerges from both the individual instruments and how they play together, brain function emerges from both the regions themselves and their dynamic interactions. Scientists are finding that we can't understand complex mental processes by studying individual brain regions in isolation, any more than we could understand a symphony by listening to each instrument separately. What's particularly fascinating is that these brain assemblages overlap and change over time. Just as a violin might be part of the string section in one moment and join a smaller ensemble in the next, brain regions can participate in multiple networks simultaneously and shift their roles as needed. But note that, in this view, even brain networks aren't seen as constituted of fixed sets of regions; instead, they are dynamic coalitions that form and dissolve based on the brain's changing needs. This flexibility helps explain how the brain can support such a wide range of complex behaviours using a limited number of regions.

Categories such as perception, cognition, action, emotion and motivation are not only the titles of introductory textbooks, but reflect how psychologists and neuroscientists conceptualise the organisation of the mind and brain.
They seek to subdivide the brain into territories that have preferences for processes that support a specific type of mental activity. Some parts handle perception, such as the back of the head and its involvement in vision, or the front of the brain and its role in cognition. And so on. The decomposition of the mind-brain adopted by many neuroscientists follows an organisation that is called modular. Modularity here refers to the idea that the brain consists of specialised, relatively independent components or modules that each handle specific mental functions, much like distinct parts in a machine that work together but perform separate operations.

Yet, a modular organisation, popular as it is among neuroscientists, is inconsistent with the principles of the anatomical and functional neuroarchitecture discussed here. The brain's massive combinatorial connectivity and highly distributed functional coordination defy clean compartmentalisation. The extensive bidirectional pathways spanning the entire brain create crisscrossing connectional systems that dissolve potential boundaries between traditional mental domains (cognition, emotion, etc). Brain regions dynamically affiliate with multiple networks in a context-dependent manner, forming coalitions that assemble and disassemble based on current demands. This interactional complexity means that functions aren't localised to discrete modules but emerge from decentralised coordination across multiregion assemblies. The properties that emerge from these interactions cannot be reduced to individual components, making a strict modular framework inadequate for capturing the brain's entangled nature.

Why is the brain so entangled, and thus so unlike human-engineered systems? Brains have evolved to provide adaptive responses to challenges faced by living beings, promoting survival and reproduction – not to solve isolated cognitive or emotional problems. In this context, even the mental vocabulary of neuroscience and psychology (attention, cognitive control, fear, etc), with origins disconnected from the study of animal behaviour, provides problematic theoretical pillars. Instead, approaches inspired by evolutionary considerations provide better scaffolds to sort out the relationships between brain structure and function.

The implications of the entangled brain are substantial for the understanding of healthy and unhealthy brain processes. It is common for scientists to seek a single, unique source of psychological distress. For example, anxiety or PTSD is the result of an overactive amygdala; depression is caused by deficient serotonin provision; drug addiction is produced by dopamine oversupply. But, according to the ideas described here, we should not expect unique determinants for psychological states. Anxiety, PTSD, depression and so on should be viewed as system-level entities. Alterations across several brain circuits, spanning multiple brain regions, are almost certainly involved. As a direct consequence, healthy or unhealthy states should not be viewed as emotional, motivational or cognitive. Such classification is superficial and neglects the intermingling that results from anatomical and functional brain organisation. We should also not expect to find a single culprit, not even at the level of distributed neuronal ensembles.
The conditions in question are too heterogeneous and varied across individuals; they won't map to a single alteration, including at the distributed level. In fact, we should not expect a temporally constant type of disturbance, as brain processes are highly context-dependent and dynamic. Variability in the very dynamics will contribute to how mental health experiences are manifested. In the end, we need to stop seeking simple explanations for complex mind-brain processes, whether they are viewed as healthy or unhealthy. That's perhaps the most general implication of the entangled brain view: that the functions of the brain, like the murmurations of starlings, are more complicated and more mysterious than its component parts.
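As a coda to the essay above: the murmuration image it opens and closes with corresponds to Craig Reynolds' classic "boids" model, in which flocking emerges from three purely local rules (cohesion, alignment, separation) with no central controller. A minimal Python sketch; all parameter values are arbitrary assumptions chosen for illustration:

```python
# Minimal "boids"-style sketch: coordinated flocking emerges from local rules.
# All constants (counts, radii, rule weights, speed cap) are assumed values.
import numpy as np

N, STEPS, RADIUS = 50, 200, 1.5
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (N, 2))    # bird positions in a 10x10 arena
vel = rng.normal(0, 0.1, (N, 2))    # initial headings are random

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d > 0) & (d < RADIUS)              # only nearby birds matter
        if not nbrs.any():
            continue
        cohesion   = pos[nbrs].mean(axis=0) - pos[i]   # steer toward neighbours
        alignment  = vel[nbrs].mean(axis=0) - vel[i]   # match neighbours' heading
        separation = (pos[i] - pos[nbrs]).sum(axis=0)  # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.002 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.maximum(speed, 1e-9) * np.clip(speed, None, 0.3)
    return pos + new_vel, new_vel

for _ in range(STEPS):
    pos, vel = step(pos, vel)

# Polarisation: length of the mean unit-heading vector (0 = disorder, 1 = aligned).
headings = vel / np.maximum(np.linalg.norm(vel, axis=1, keepdims=True), 1e-9)
print("polarisation:", float(np.linalg.norm(headings.mean(axis=0))))
```

Starting from random headings, the polarisation climbs toward 1 as purely local interactions align the flock – order with no choreographer, which is the emergence principle the essay builds on.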
Imagen - Google DeepMind
May 20, 2025
Imagen
Google DeepMind
Imagen 4 is our best text-to-image model yet, with photorealistic images, near real-time speed, and sharper clarity – to bring your imagination to life.