(damn)

kidfury

Duke status
Oct 14, 2017
24,647
10,478
113
"It was clear to anyone who had an ounce of appreciation for what the job of the presidency entails, to anyone who respected the constitutional order of our government, to anyone who worried about the health and safety of this nation, to anyone with a moral compass, to anyone who prizes the common sense of purpose that great leaders can summon, that Donald J. Trump had no business anywhere near the presidency."

I don't think shex knew. I don't think gromsdad knew. I don't think Joe Rogan knew.

 

sirfun

Duke status
Apr 26, 2008
17,539
6,873
113
U.S.A.
By Noam Chomsky, Ian Roberts and Jeffrey Watumull
Dr. Chomsky and Dr. Roberts are professors of linguistics. Dr. Watumull is a director of artificial intelligence at a science and technology company.

Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
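The "search for patterns and generate statistically probable outputs" idea the authors describe can be sketched in miniature. The toy bigram model below is a hypothetical illustration only — real systems like ChatGPT use neural networks trained on vastly more data — but it shows the core move: count co-occurrences, then emit the most probable continuation.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; any text would do. This is purely illustrative.
corpus = "the apple falls . the apple is red . the sky is blue .".split()

# Count how often each word follows each other word (bigram counts).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev):
    """Return the statistically most probable next word after `prev`."""
    return counts[prev].most_common(1)[0][0]

# "apple" follows "the" twice, "sky" only once, so the model picks "apple".
print(predict("the"))  # -> apple
```

The model has no notion of why "apple" follows "the" — no grammar, no causal mechanism, only frequencies — which is exactly the limitation the op-ed goes on to press.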
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.

The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.
Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws. Of course, any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered. (As Sherlock Holmes said to Dr. Watson, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”)

But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
For this reason, the predictions of machine learning systems will always be superficial and dubious. Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd? Because it might analogize the pattern it inferred from sentences such as “John ate an apple” and “John ate,” in which the latter does mean that John ate something or other. The program might well predict that because “John is too stubborn to talk to Bill” is similar to “John ate an apple,” “John is too stubborn to talk to” should be similar to “John ate.” The correct explanations of language are complicated and cannot be learned just by marinating in big data.
Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience. While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
The theory that apples fall to earth because that is their natural place (Aristotle’s view) is possible, but it only invites further questions. (Why is earth their natural place?) The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.


 
Reactions: hammies

sirfun

Duke status
Apr 26, 2008
17,539
6,873
113
U.S.A.


By Elizabeth Warren
Senator Warren is a Democrat from Massachusetts.

No one should be mistaken about what unfolded over the past few days in the U.S. banking system: These recent bank failures are the direct result of leaders in Washington weakening the financial rules.
In the aftermath of the 2008 financial crisis, Congress passed the Dodd-Frank Act to protect consumers and ensure that big banks could never again take down the economy and destroy millions of lives. Wall Street chief executives and their armies of lawyers and lobbyists hated this law. They spent millions trying to defeat it, and, when they lost, spent millions more trying to weaken it.
Greg Becker, the chief executive of Silicon Valley Bank, was one of the many high-powered executives who lobbied Congress to weaken the law. In 2018, the big banks won. With support from both parties, President Donald Trump signed a law to roll back critical parts of Dodd-Frank. Regulators, including the Federal Reserve chair Jerome Powell, then made a bad situation worse, letting financial institutions load up on risk.
Banks like S.V.B. — which had become the 16th largest bank in the country before regulators shut it down on Friday — got relief from stringent requirements, basing their claim on the laughable assertion that banks like them weren’t actually “big” and therefore didn’t need strong oversight.

I fought against these changes. On the eve of the Senate vote in 2018, I warned, “Washington is about to make it easier for the banks to run up risk, make it easier to put our constituents at risk, make it easier to put American families in danger, just so the C.E.O.s of these banks can get a new corporate jet and add another floor to their new corporate headquarters.”
I wish I’d been wrong. But on Friday, S.V.B. executives were busy paying out congratulatory bonuses hours before the Federal Deposit Insurance Corporation rushed in to take over their failing institution — leaving countless businesses and nonprofits with accounts at the bank alarmed that they wouldn’t be able to pay their bills and employees.
S.V.B. suffered from a toxic mix of risky management and weak supervision. For one, the bank relied on a concentrated group of tech companies with big deposits, driving an abnormally large ratio of uninsured deposits. This meant that weakness in a single sector of the economy could threaten the bank’s stability.
Instead of managing that risk, S.V.B. funneled these deposits into long-term bonds, making it hard for the bank to respond to a drawdown. S.V.B. apparently failed to hedge against the obvious risk of rising interest rates. This business model was great for S.V.B.’s short-term profits, which shot up by nearly 40 percent over the last three years — but now we know its cost.
S.V.B.’s collapse set off looming contagion that regulators felt forced to stanch, leading to their decision to dissolve Signature Bank. Signature had touted its F.D.I.C. insurance as it whipped up a customer base tilted toward risky cryptocurrency firms.

Had Congress and the Federal Reserve not rolled back the stricter oversight, S.V.B. and Signature would have been subject to stronger liquidity and capital requirements to withstand financial shocks. They would have been required to conduct regular stress tests to expose their vulnerabilities and shore up their businesses. But because those requirements were repealed, when an old-fashioned bank run hit S.V.B‌., the‌ bank couldn’t withstand the pressure — and Signature’s collapse was close behind.
On Sunday night, regulators announced they would ensure that all deposits at S.V.B. and Signature would be repaid 100 cents on the dollar. Not just small businesses and nonprofits, but also billion-dollar companies, crypto investors and the very venture capital firms that triggered the bank run on S.V.B. in the first place — all in the name of preventing further contagion.
Regulators have said that banks, rather than taxpayers, will bear the cost of the federal backstop required to protect deposits. We’ll see if that’s true. But it’s no wonder the American people are skeptical of a system that holds millions of struggling student loan borrowers in limbo but steps in overnight to ensure that billion-dollar crypto firms won’t lose a dime in deposits.
These threats never should have been allowed to materialize. We must act to prevent them from occurring again.
First, Congress, the White House and banking regulators should reverse the dangerous bank deregulation of the Trump era. Repealing the 2018 legislation that weakened the rules for banks like S.V.B. must be an immediate priority for Congress. Similarly, Mr. Powell’s disastrous “tailoring” of these rules has put our economy at risk, and it needs to end — now.
Bank regulators must also take a careful look under the hood at our financial institutions to see where other dangers may be lurking. Elected officials, including the Senate Republicans who, just days before S.V.B.’s collapse, pressed Mr. Powell to stave off higher capital standards, must now demand stronger — not weaker — oversight.
Second, regulators should reform deposit insurance so that both during this crisis and in the future, businesses that are trying to make payroll and otherwise conduct ordinary financial transactions are fully covered — while ensuring the cost of protecting outsized depositors is borne by those financial institutions that pose the greatest risk. Never again should large companies with billions in unsecured deposits expect, or receive, free support from the government.

Finally, if we are to deter this kind of risky behavior from happening again, it’s critical that those responsible not be rewarded. S.V.B. and Signature shareholders will be wiped out, but their executives must also be held accountable. Mr. Becker of S.V.B. took home $9.9 million in compensation last year, including a $1.5 million bonus for boosting bank profitability — and its riskiness. Joseph DePaolo of Signature got $8.6 million. We should claw all of that back, along with bonuses for other executives at these banks. Where needed, Congress should empower regulators to recover pay and bonuses. Prosecutors and regulators should investigate whether any executives engaged in insider trading or broke other civil or criminal laws.
These bank failures were entirely avoidable if Congress and the Fed had done their jobs and kept strong banking regulations in place since 2018. S.V.B. and Signature are gone, and now Washington must act quickly to prevent the next crisis.
Elizabeth Warren is a United States senator for Massachusetts.
 

2surf

Duke status
Apr 12, 2004
15,282
2,050
113
California USA
www.allcare.com
Once upon a time, a little old lady was walking down the street dragging two large plastic garbage bags behind her. One of the bags was ripped, and every once in a while a $20 bill fell out onto the sidewalk.

Noticing this, a policeman stopped her and said, “Ma’am, there are $20 bills falling out of that bag.”

“Oh, really? Darn it!” said the little old lady. “I’d better go back and see if I can find them. Thanks for telling me, officer.”

“Well, now, not so fast, little lady,” said the cop. “Where did you get all that money? You didn’t steal it, did you?”

“Oh, no, no,” said the old lady. “You see, my backyard is right next to a golf course. A lot of golfers come and pee through a knot hole in my fence, right into my flower garden. It used to really tick me off. Kills the flowers, you know. Then I thought, ‘Why not make the best of it?’ So now I stand behind the fence by the knot hole, real quiet, with my hedge clippers. Every time some guy sticks it through my fence, I surprise him, grab hold of it and say, ‘O.K., buddy! Give me $20, or off it comes.’”

“Well, that seems only fair,” said the cop, laughing. “OK. Good luck! Oh, by the way, what is in the other bag?”

The old lady replies with a grin, “Well, not everybody pays.”

Don’t mess with little old ladies!