
hatrack

(61,212 posts)
Thu Nov 21, 2024, 07:35 AM

Projected 10-30% Increase In Natural Gas Power Generation Because Muh AI Datacenters Must Be Fed!!! Oh, And Coal, Too

The explosion of data center development across the United States to serve the artificial intelligence industry is threatening decades of progress cutting greenhouse gas emissions, as utilities lay plans for scores of new gas power plants to meet soaring electricity demand.

EDIT

As part of the U.S. pledge to cut its total greenhouse gas emissions in half by the end of the decade, compared to 2005 levels, President Joe Biden has vowed to eliminate all power grid emissions by 2035. But there are 220 new, gas-burning power plants in various stages of development nationwide, according to the market data firm Yes Energy. Most of those plants are targeted to come online before 2032. Each has a lifespan of 25 to 40 years, meaning most would not be fully paid off — much less shut down — before federal and state target dates for transitioning power grids to cleaner electricity.

EDIT

The power sector was a bright spot in cutting emissions. They fell steadily over the last few decades, even as electricity use grew. A big factor was the steep drop in coal burning. Coal powered more than half of U.S. electricity in 1990, according to the University of Maryland’s Center for Global Sustainability. This year, it is less than 20 percent.

But even coal is making a comeback amid the data center boom. In several states, planned retirements of coal plants are already on hold. A Duke Energy executive told Bloomberg News that it will reexamine plans to burn less coal in Indiana if the Trump administration rescinds power plant emission rules. Data centers are behind two-thirds of the new demand for power in the Omaha region, where the Omaha Public Power District has delayed the closure of a major coal plant and is bringing online two large new gas plants. The utility said in a statement that it might purchase green energy credits, called “carbon offsets” in the future “as part of its overall plan to reach net-zero carbon.”

EDIT

https://www.washingtonpost.com/climate-environment/2024/11/19/ai-cop29-climate-data-centers/


highplainsdem

(52,845 posts)
1. Disgusting. And this is mostly about that nearly worthless genAI used for student cheating and other
Thu Nov 21, 2024, 07:55 AM

fraud, for AI slop fake art and video flooding the internet, for AI slop fake books/articles and music, for AI-generated-and-spread misinformation and disinformation, for deepfakes including deepfake porn.

The sheer stupidity of the genAI boom, given all the harm it does, is almost unbelievable.

But it entertains and deceives the gullible, lets students think they can cheat their way through school, and lets greedy company owners and execs think they can dump most of their employees so there's more money for those at the top. It almost always brings out the worst in people.

But it's new tech, so we're supposed to welcome it and adapt to it.

hatrack

(61,212 posts)
2. Yes, let us all kneel and praise Shiny New Thing!!
Thu Nov 21, 2024, 08:23 AM

It's Shiny!! It's New!! It's Thing!!

We will now proceed to kill ourselves for pixels. Literally.

NNadir

(34,847 posts)
3. My son's Ph.D work involves convolutional neural networks for...
Thu Nov 21, 2024, 10:18 AM

...image processing of TEM for printed steels.

I'll let him know it's immoral and pornographic to conduct his work.

I'll also be sure to let all the folks I know working on protein dynamics simulations understand that their work is immoral and pornographic.

You learn something every day.

It is possible of course to generate electricity without releasing dangerous fossil fuel waste, but it's had rather dishonest bad press.

highplainsdem

(52,845 posts)
4. Don't misread my post. I said nearly worthless. There are some good uses for AI, but they don't excuse
Thu Nov 21, 2024, 10:44 AM

the harm other uses are causing.

highplainsdem

(52,845 posts)
6. GenAI models are almost all built on abuse - theft of intellectual property - and the dissemination and
Thu Nov 21, 2024, 11:05 AM

marketing of those tools mostly encourages those harmful uses.

I doubt valid, non-harmful scientific and medical uses of genAI account for more than a tiny percentage of the money, energy and water it uses.

And the world isn't going to benefit much from a few scientific advances if the population is dumbed down, culture is stolen so corporate-owned mimicry is sold back to us, most jobs are lost, surveillance is worsened, and the environment is seriously harmed.

NNadir

(34,847 posts)
7. Well, should I assume you are an expert and can provide...
Thu Nov 21, 2024, 12:02 PM

...some support for this contention?

What percentage of computational data tools is used for frivolous purposes?

DU certainly uses data services. I'm sure there are people in the coming fascist government who would consider it unworthy of water and electricity.

highplainsdem

(52,845 posts)
8. DU doesn't use genAI. There are lots of articles out there on genAI increasing power demands, and
Thu Nov 21, 2024, 12:30 PM

a number have been posted here.

Again, I was not including scientific research among the harms done by genAI (but I will point out that after all the hype about AlphaFold, it was later revealed that, given the hallucinations, traditional experimental checking still had to be done - https://biosciences.lbl.gov/2024/01/23/researchers-assess-alphafold-model-accuracy/ ).

The harms I mentioned to society are not "frivolous" and it's wrong to pretend they are, just as it was wrong for you to post above that I'd said that your son's work was "immoral and pornographic" when I'd said nothing remotely like that. And please don't try to confuse data center construction that tech companies have said they need for genAI with computing in general. Google's AI search, for instance, uses 10x the electricity of regular search (and often produces less reliable results, and diverts traffic from websites the data is taken from, hurting the internet).

NNadir

(34,847 posts)
9. Oh. I see. We have to make distinctions about how servers are used. I'm a bad guy, I guess. I use Google Scholar...
Thu Nov 21, 2024, 05:26 PM

...often to support my scientific work, and given the richness of the literature, its vast scope, I certainly wish I had something like CCU-Llama, which I described in the science forum: CCU-Llama.

Who's going to monitor the "correct use" of servers? The Trump administration?

It's funny, because just the other day I was having a conversation with another scientist about whether we should always trust the sophisticated software we use, both online and in house, to interpret mass spec data. I'm so old, of course, that I remember sitting with a pencil and paper, calculating the masses of potential fragments and then looking at the data to see if such a mass was there visually. I could spend a week or more with a complex compound that way. Now, in less than a few minutes, I can see all the PTMs and sequences from a very large protein, no trouble at all. I almost never find a result that seems to be invalidated by experiment, unless it involves an isobaric species, and now there are ways around that as well.

The public servers, like Uniprot, do, or must do, something very much like AI, although honestly I don't know how it works, just that it's fast as hell and I have direct experience with it being perfectly correct on multiple occasions, for example, finding the exact correct species associated with a highly conserved protein when I was blinded. And trust me, the protein in question is highly conserved across a wide range of species, from single-cell organisms to human beings.

However, we can, and do, set false discovery rates in the use of the software, and that is designed to establish the error parameters. The fact that there is a "false discovery rate" means that we have to be careful with the data; it is not determinative so much as (highly) suggestive.
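For anyone curious about the mechanics: FDR control of this kind is commonly done with something like the Benjamini-Hochberg procedure. A minimal sketch (the p-values here are made up purely for illustration, not real mass spec output):

```python
# Minimal Benjamini-Hochberg sketch: given p-values for candidate
# identifications, keep those that pass a target false discovery rate.
def benjamini_hochberg(pvals, fdr=0.05):
    n = len(pvals)
    # Sort indices by ascending p-value
    order = sorted(range(n), key=lambda i: pvals[i])
    # Find the largest rank k such that p_(k) <= (k / n) * fdr
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / n * fdr:
            k = rank
    # Accept the k smallest p-values (indices into the original list)
    return sorted(order[:k])

# Hypothetical p-values for five candidate matches
pvals = [0.001, 0.008, 0.039, 0.041, 0.62]
print(benjamini_hochberg(pvals, fdr=0.05))  # accepts the first two
```

Note that the accepted set is a statistical guarantee about the *rate* of false positives, not a verdict on any individual identification - which is exactly why the results remain suggestive rather than determinative.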

Your link, by the way, refers to an article about a paper, this one: Terwilliger, T.C., Liebschner, D., Croll, T.I. et al. AlphaFold predictions are valuable hypotheses and accelerate but do not replace experimental structure determination. Nat Methods 21, 110–116 (2024). It certainly doesn't discount the value of AlphaFold, remarking that it often produces results remarkably similar to crystallized proteins, which, as the authors note, do not necessarily correspond to protein structure in vivo.

To wit:

...Both experimentally determined protein structures and predicted models have important limitations [11,13,14]. Proteins are flexible and dynamic, and their distributions of conformations depend on temperature, solution conditions and binding of ligands or other proteins (including crystal contacts in the case of crystallography) [15]. A model of a high-resolution crystal structure can accurately represent the dominant conformation(s) present in a crystal in a particular environment [11], but the structure may differ under another set of conditions [14]. Artificial intelligence (AI)-based models can in many cases be very accurate; however, they do not yet take into account the presence of ligands, covalent modifications or environmental factors, and take protein–protein interactions and multiple conformations into account in a limited way [1,2,16,17]...


From the conclusion of the paper:

...Despite their limitations, AlphaFold predictions are already changing the way that hypotheses about protein structures are generated and tested [1,2,5,6]. Indeed, even though not all parts of AlphaFold predictions are accurate, they provide plausible hypotheses that can suggest mechanisms of action and allow designing of experiments with specific expected outcomes. Using these predictions as starting hypotheses can also greatly accelerate the process of experimental structure determination [27,34,35]. AlphaFold predictions often have very good stereochemical characteristics, making them excellent hypotheses for local structural features. For example, for the 102 structures analyzed here, the mean percentage of residues with ‘favored’ Ramachandran configurations was 98%, greater than that of the corresponding deposited models (97%), and the mean percentage of side-chain conformations classified as outliers was just 0.2%, compared with 1.5% for deposited models [31]. Such AlphaFold predictions with highly plausible geometry could be used in later stages of experimental structure determination as potential conformations for segments of structure that are not fully clear in experimental density maps...


To me, this doesn't read like a dismissal of AlphaFold, but rather a wise cautionary suggestion as to how it should be used.

Of course, in the history of science, there have been many calculations that proved not to hold up to experimental data. Experiment always prevails over theory, or should, anyway. One should always check theory against results, and in fact, that is what automated machine learning does: it compares data with theory to determine whether the theory holds, and adjusts the theory accordingly. But yes, the output of this process needs human review.

None of this means that there is something corrupt or illegitimate about data centers. My remark about my son's work was intended not to be "right" or "wrong," but rather to suggest that we ought to be careful with how we judge technologies. Sure, there are kids who produce term papers on ChatGPT. That doesn't mean that ChatGPT is evil. I have an assistant, not a scientist, who brings me text from it regularly, with my knowledge. It often fails the Turing test, but recognizing that, it can help unblock writer's block. We never use it directly in our reports; it suggests, not defines, a path.

OKIsItJustMe

(21,020 posts)
10. TED: AI is dangerous, but not for the reasons you think
Thu Nov 21, 2024, 08:12 PM
https://www.ted.com/talks/sasha_luccioni_ai_is_dangerous_but_not_for_the_reasons_you_think

AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.