Content is provided by Asim Hussain and Green Software Foundation. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Asim Hussain and Green Software Foundation or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://el.player.fm/legal.

We Answer Your Questions Part 2

46:26
 
Host Chris Adams is joined by Asim Hussain, Executive Director of the Green Software Foundation, for another mailbag session, bringing you the unanswered questions from the recent live virtual event that the Green Software Foundation hosted on World Environment Day, June 5, 2023. Asim and Chris start with a discussion of the complexities of capturing the energy consumed by memory, I/O operations, and network calls in the SCI. They explore real examples of measuring the SCI in CI/CD pipelines, showcasing projects like the Green Metrics Tool and the Google Summer of Code Wagtail project. The conversation shifts to the carbon efficiency of GPUs and their environmental impact, touching on the tech industry's increasing hardware demands. They also address the potential for reusing cooling water from data centers, considering various cooling designs and their impact on water consumption.

Learn more about our people:

Find out more about the GSF:

Questions:
  • SCI is not capturing energy consumed by memory, I/O operations, network calls, etc. So what is your take on it? [3:27]
  • Does the GSF have any real examples of measuring SCI on pipelines of CI/CD? [7:15]
  • What is the carbon efficiency (or otherwise) of GPUs, say, onerous compute vector search? Is that good for the environment? [23:40]
  • Can the cooling water for data centers be reused? [36:28]

Resources:

If you enjoyed this episode then please either:


TRANSCRIPT BELOW:
Asim Hussain:
We couldn't have done this two years ago. I feel like so many pieces of the puzzle are now coming into place, where people can really very easily, with an hour's worth of work, measure the emissions of a piece of software. Basically, the dream world I have is in six months time, thousands of open source repos all over the world just drop a configuration file into the root of their repo, add a GitHub action, and they're measuring an SCI score for their product.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.
Hello, and welcome to a special Mailbag episode of Environment Variables. This is our second installment of the format, where we bring you some of the questions that came up during the recent virtual event hosted by the Green Software Foundation on World Environment Day back in June. If you missed our first episode in this mailbag format, feel free to jump back, where you'll see some of the other questions that came up and some of our eloquent, and possibly not quite so eloquent, answers as we ran through them. Today, we're going to run through a few more questions. And as ever, I'm joined by Asim Hussain, Executive Director of the Green Software Foundation.
Hi, Asim!
Asim Hussain: Hi Chris, how are you doing?
Chris Adams: I'm not too bad. A bit grey outside over here in Berlin, but otherwise not too bad really. Okay, before we dive into the questions we'll run through: if you're new to Environment Variables, every time we record one of these, we share extensive show notes with all the links to the papers, the sources, and the things we discuss.
So if any of this has piqued your interest, there will be a link you can jump into to basically continue your nerding out about this particular subject. And I think that's pretty much it. But before that, maybe we should introduce ourselves. Asim, I've introduced you as the Executive Director, but I suspect you might want to say a bit more about the Green Software Foundation, and what else you do when you're not working at the GSF?
Asim Hussain: Thanks. Yes, I'm the Executive Director of the Green Software Foundation. I'm also the Chairperson of the Green Software Foundation, so I hold both roles right now. Yeah, I've basically been thinking about software and sustainability, like Chris, for quite a few years. Outside of the GSF, I'm also the Director of Green Software at Intel, where I try and work through an Intel strategy regarding, you know, the greening of software, and help there.
Because, you know, the only people who buy stuff from Intel are people who run software.
Chris Adams: Thank you very much for that. We'll have more insightful revelations like this coming ahead.
Asim Hussain: It gets better than this,
Chris Adams: Yeah. Yeah, my name is Chris Adams. It's a little bit Monday this morning, it seems. I work at the Green Web Foundation, which is a non-profit based in the Netherlands, focusing on reaching an entirely fossil-free internet by 2030. And I'm also a maintainer of a library called CO2.js, as well as being one of the chairs of the policy working group inside the Green Software Foundation. I'm also the regular host of this podcast specifically. Should we dive into these questions for the mailbag?
All right.
Asim Hussain: Let's go for it.
Chris Adams: All right.
So the first question that came through was one about the SCI. The question is, this SCI is not capturing energy consumed by memory, IO operation, network calls, etc. What is your take on it? This is a question from the World Environment Day thing. This might be a chance to explain what the SCI is, because as I understood it, it does capture
some of that stuff,
Asim Hussain: Yeah, my answer on the day would have been like, huh? Yeah, it does. Or something a lot more eloquent than that. But yeah, the Software Carbon Intensity is a specification being built by the Standards Working Group in the Green Software Foundation. It is almost an ISO standard; our goal for this year is to really go through that process.
Chris Adams: And just to jump in, ISO is the International Organization for Standardization.
Asim Hussain: Yes, that's the one. Yep. And what it is, let me just very quickly say what it is: it is a method of measuring software carbon intensity, which is a rate. If you listen to a podcast, it'll probably be carbon per minute of listening. It's a rate rather than a total. The other really standout aspect of it is that it's been designed very much by people who build software.
And so it's been designed by people who actually build and measure software to act as a good metric to drive reduction. It makes sure that aspects are included so that if you did things like move your compute to a greener region, or move your compute to a time when it's greener, things like that would actually be recognized in the calculation.
Whereas, for instance, if you use the GHG Protocol, oftentimes stuff like that isn't factored in, and you can do carbon-aware computing till the cows come home but it wouldn't really affect your GHG score. Those are some of the aspects of the SCI; it's very much built that way. Now, what I will say is, if you actually look at the SCI equation, it's very simple.
You basically compute it per R, so it's always what we call per R: per minute, minute might be the R, or per user, user might be the R. So per R, you have to figure out how much energy is consumed. You have to figure out what we call embodied carbon, so how much hardware is being used; if it's per minute, you try to figure out how long this piece of hardware is normally used for, divide by that, and you get the per-minute share. Then the other thing you factor in is a thing called I, which is the grid emissions factor: how clean or dirty your electricity is for that period of time. And the key thing there is, that's it. Therefore, it includes everything. It doesn't exclude memory, or I/O, or network, because it's just energy, hardware, and grid emissions, and so as long as you've got some values for that, for your memory, for your I/O, for other things, you can do it. What I will say to answer, and I don't think this was in the spirit of the question, but I think the spirit of it is: measuring is hard. It's really hard. Like, Chris, you've got co2.js, and that does a great job on the network side, but even then you have multiple flags depending on whether you want to use it in this mode or with this model or this assumption.
Like, I love it, I use it all the time these days. What did you say, like, all models are bad, some are useful? Yes, I do think that calculating an SCI score which includes memory, I/O, network calls, and all the other factors in software is challenging, and I will acknowledge that, but it's also something a lot of people are working on. We're working on it with things like the Impact Engine in the Foundation, Chris, you're working on it with co2.js.
Arne is working on it from Green Coding with those models. Yeah.
Chris Adams: with GMT, the Green Metrics Tool.
All right,
Asim Hussain: metrics, oh yeah, yep.
Chris Adams: Hopefully that should give plenty to refer to. I'll add a couple of links explaining what the SCI is, to make that a little bit clearer for people. Should we jump on to the next question, Asim?
Asim Hussain: Yeah, sure,
Chris Adams: Does the GSF have any real examples of measuring the SCI in CI/CD pipelines? That's a soup of different letters there, but as I understand it: the GSF being the Green Software Foundation; the SCI being the Software Carbon Intensity, which is a way to measure the carbon footprint of software; and CI/CD being continuous integration and continuous delivery, like automating the process of getting software out for people to use.
Asim Hussain: mm hmm, yep,
Chris Adams: All right, so now that we've explained what the question meant and unpacked all those TLAs, three-letter acronyms, do you want to have a go at this one? Because I can add a little bit myself with some recent work that we've been doing in my day job.
Asim Hussain: Yeah, so definitely, I'd say there's two things. A, a lot of the work that goes on is behind closed doors, and that's one of the things I find interesting about this space: sometimes you'll just never hear of it. So, in terms of real examples of measuring the SCI, there's a project called the SCI Guide, which has a number of case studies inside it, where organizations are really trying to document what they're doing and reveal the numbers.
Revealing numbers is very challenging for a lot of organizations, I can attest to it. You have to go through so many levels of approval to reveal your number. So we've only got a couple of examples of those, but there's definitely tooling we're building to make this a lot easier. We're building something called the Impact Engine Framework, which is what CarbonQL is now called.
So if you've heard me say the word CarbonQL, it's now called the Impact Engine Framework. It's a tool with a manifest file, and you can use it to calculate emissions. You can say, I want to use co2.js, I want to use Cloud Carbon Footprint, I want to use Green Metrics, or whatever.
And it helps you measure an SCI score. And where we're starting to think now is we'd like to get to the point where, there is a GitHub Action, basically, the dream world I have is in six months time, thousands of open source repos all over the world, just drop a configuration file into the root of their repo, add a GitHub Action, and they're measuring an SCI score for their product.
It's been two years now in the making of even the specification. We couldn't have done this two years ago. I feel like so many pieces of the puzzle are now coming into place where people can really, very easily, with an hour's worth of work, measure the emissions of a piece of software, and that's where, so yeah, the CI/CD thing is coming, I would say, in six months time, at least from our side.
And it sounds like you've already got some work anyway from the green coding landscape.
Chris Adams: yeah,
I actually didn't know about the Impact Engine. That's new to me as well.
Asim Hussain: yeah.
Chris Adams: The thing that we've been using: in my day job, one thing we've been doing with an open source project called Wagtail is working with some of the core developers there and, through the Google Summer of Code, a couple of early career technologists who I've been mentoring, to introduce some essentially green coding features into Wagtail itself.
Now, the last release of Wagtail came out at the beginning of August, actually the end of July. Wagtail is a content management system, a bit like WordPress, but unlike WordPress it's written in Python, on top of a software library called Django, which is what our own platform uses. Wagtail is used by a number of websites, including NASA's; if you visit the NHS website, you're using a Wagtail website. And what we've been doing is, we got chatting to the folks at Green Coding Berlin, which is pretty self-explanatory: they do green coding, and they live in Berlin. We got chatting with them about this because we were trying to understand, okay, if we're going to make some changes, are we going to be able to understand the environmental impact, are we making progress? They also have a very literally named tool called the Green Metrics Tool. Can you guess what the Green Metrics Tool does, Asim?
Asim Hussain: I don't know, man, it's hard with these, these terms. Does it, does it generate green metrics in a tool?
Chris Adams: Oh, dude, it's so German. I live in Germany. This is like, to see
Asim Hussain: What's it say in German? Say it in German.
Chris Adams: No, we don't actually have one; it's, you know, the Green Metrics Tool is what it is, even in German.
Asim Hussain: Okay, all right,
Chris Adams: So, I think GMT is what we end up referring to it,
Asim Hussain: Oh, that's quite funny. Greenwich Mean Time. Greenwich Mean Time as well, yeah, yeah.
Chris Adams: We've been using that, and the thing I think is quite interesting about what the folks at Green Coding Berlin have been doing is they've realized, okay, there's a bunch of open source software in the world. So they've been forking a bunch of open source tools and running this.
And then whenever there's a CI run, they've been measuring some of this, and they've actually got a project called Eco CI, which is basically a GitHub Action that measures the power used when you do a CI run to test something. So they've got some of these figures, and the thing they've been doing, which we found quite useful as well, is using a tool which allows us to run through common scenarios.
Like: I go to a website, I browse through a few places, I search for something, I submit a form, I upload something. We've got a set of journeys that we follow, and we're using those as our baseline to see: is the work that myself and Aman, the student I've been working with the most, have been doing there helping, or is it not helping? Because the particular piece of work we've done recently is introduce support for a new image file format called AVIF, instead of just using JPEGs, and it massively reduces, typically halves, the size of any of the images that you use. But there is a bit of a spike in energy usage compared to what you would normally use, both on the server and in the browser.
So we're now actually trying to run this in various scenarios to see: is this actually an improvement? Because even though it results in a nicer experience, we're trying to make sure we're going in the right direction. So that's one of the things we have, and there are a couple of other things going on as well.
But that's the most concrete example I might refer to. And there are a couple of links to both the output from this and the open source projects, because you can mess around with some of this stuff pretty much right after this podcast, if you really decided to.
Asim Hussain: So this is the stuff that is using direct measurement. So you're forking it, running it on like a special rig that is like measuring it. Yeah, I think that's, it's interesting. I feel like this is like something that's been in discussion with the SCI as well, but we never landed on some good terminology for it.
I think we use measurement versus calculation. And we try to say measurement for the direct thing, like what's happening in green coding: direct measurement, something from counters or from a power meter. Whereas a calculation is when you're just taking, we call it now the impact observation, you take some observations about the system, you pass them into a model, and you get an estimate of emissions. So I think the language here has got to get a little bit more specific. I remember on the calls we were even asking academics whether there was specific language around this, and there wasn't.
Maybe one of the listeners can say, "Actually, Asim, what you're describing, the word for calculation is X and the word for measurement is Y." This is where we're getting to, and I think this is where the conversation is in this general metrics area. One of the reasons I'm exploring modeling is actually for a very interesting use case, which is: once you model, you can simulate.
So once you've got a model, you can then tweak the model and say things like: what if you were to change some aspect of the system? You've got a model, so can you then model that change and estimate the emissions reduction? And that's where modeling has an advantage, or a real disadvantage, in the fact that it's a model and you're not really going to get a great actual measure.
So I'm not too sure, we don't have the answers. I just think this is an interesting question. It's like measurement versus calculation and I haven't fully formed my thoughts on this yet as well. But I think it's going to be an active bit of discussion for a while. Maybe it has been an active bit of discussion.
Maybe I'm just really late to the conversation.
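The "once you model, you can simulate" idea can be illustrated with a deliberately tiny model: estimate emissions from an observation (energy) and a parameter (grid intensity), then tweak the parameter to simulate a change such as moving to a greener region. The grid intensity values below are made-up illustrations, not real grid data:

```python
def estimate_gco2(energy_kwh, grid_gco2_per_kwh):
    """A minimal emissions model: observations in, estimate out."""
    return energy_kwh * grid_gco2_per_kwh

# Simulate a proposed change without touching the real system:
baseline = estimate_gco2(10.0, 450)  # current region (hypothetical 450 g/kWh)
shifted = estimate_gco2(10.0, 120)   # greener region (hypothetical 120 g/kWh)
saving = baseline - shifted          # estimated reduction in gCO2e
```

This is exactly the trade Asim describes: the simulation is cheap and lets you compare what-ifs, but its output is only as trustworthy as the model behind it.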
Chris Adams: I'm not sure myself, to be honest; we'll need to see. The thing I think is relevant: when we were using this to figure out whether we're making things worse or better inside Wagtail, I asked Arne about some of this, okay, how are you actually coming up with these numbers? And they basically do both.
Yes, they have a rig, they've got a bunch of machines where they're reading the data directly. But they've also been doing a bunch of work with some of the underlying data that's published by various chip manufacturers. Something called SPECpower.
Asim Hussain: SPECpower? Yeah.
Chris Adams: Yeah. And I've shared a link which goes into stultifying amounts of detail about what they do. They've spoken about, okay, this is the data that's used by Green Pixie, by Cloud Carbon Footprint, by Teads, the French advertising company who've been trying to figure this stuff out, and they basically share their modeling of it, which could presumably be consumed by Kepler as well. So they're trying to build these models because they don't have access to the underlying data. And this is something we spoke about in the last episode and the one before that: why it's a real challenge to get these numbers, especially from large hyperscaler providers, who basically would really like to have much more control over the narrative. And in many cases they give honestly quite good reasons for not sharing these figures, citing things like commercial confidentiality or it being an attack vector. This is why I'm quite excited about the Realtime Carbon project, because it's a chance to finally get some of
Asim Hussain: The values.
Chris Adams: Some of that. So you can actually have some meaningful numbers, so you can say: are we making it better, or are we making it worse? Because even now, in 2023, getting these figures is a real challenge if you're not running your own hardware.
And I guess, I assume, now that you're working at a company that makes the hardware, or makes much more of the hardware, that's a different change for you now, you see more of it from the other side, right?
Asim Hussain: Yeah, I do, and I speak to a lot of people now. And in fact, one thing that maybe would be useful is a deep dive on SPECpower; if you want to have an episode, I can definitely bring some people. One of the people on my team has been spending a lot of time really getting into the weeds.
And it's fascinating working with people who have built CPUs their entire life, because it's a different world. You think, Chris, we just write some variables in Visual Studio Code every now and again and claim to understand technology. Once you really get under the hood, there's a lot going on that we are so abstracted away from. And one of the conversations that happens all the time inside Intel is: how do we close that gap between what developers are doing versus what the hardware can do to be more efficient?
And there just sounds like there is this chasm of opportunity here which we're just not taking advantage of. A lot of the stuff happening on the Intel side of the equation is just getting people to optimize their code, using standard kinds of optimizations that have been available for ages. And there's a lot of just understanding; I don't even understand how a CPU works sometimes, like the energy curves just do not make any sense to me.
I'm not going to go into depth about my lack of knowledge of how a CPU works, but I could definitely bring in people who are much more knowledgeable than me, and then maybe let's have a deep dive into that. It'd be a fascinating conversation, really getting into a chip.
Chris Adams: Yeah, because the thing I've noticed from the outside, and I've seen other people also referring to, is this: do you know how, a few years ago, engines had defeat devices, where if they were being tested they would work a certain way that wasn't how they really ran? It turns out you often see patterns a bit like that whenever you have benchmarks, because if you design for a benchmark, there are scenarios where a chip will work a certain way that makes it look really good in the benchmark, and that might not necessarily be how it actually works in the real world. You've got that happening in lots of cases. I would really love to deep dive into that, because this is the thing we struggle with. And it's weird that, say, most chips are most efficient at around two-thirds to three-quarters capacity, right? You might think, if I turn it all the way down, that will turn all the power down. No, it doesn't work like that.
Asim Hussain: It doesn't. Yeah.
Chris Adams: And there's all these other incentives about where you move computing jobs as a result, which has this kind of knock on effect.
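The counterintuitive efficiency curve Chris mentions comes largely from idle power: a server draws substantial power even at zero load, so useful work per watt peaks well above half load. This toy model uses made-up wattages, not measurements of any real chip, and just illustrates why the sweet spot sits around two-thirds to three-quarters utilization:

```python
def power_watts(utilization):
    """Toy power model: 100 W idle draw plus a superlinear active component."""
    return 100 + 200 * utilization ** 2

def efficiency(utilization):
    """Useful work delivered per watt drawn."""
    return utilization / power_watts(utilization)

# Sweep utilization from 10% to 100% and find the sweet spot:
levels = [round(0.05 * i, 2) for i in range(2, 21)]
best = max(levels, key=efficiency)  # peaks around 70% utilization
```

Below the peak, the fixed idle draw dominates; above it, the superlinear active power does. That is the shape driving the scheduling incentives Chris alludes to.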
Alright, we've.
Asim Hussain: There's actually really interesting work around this. When we talk about moving compute around different parts of the world, there's actually a really great open source project run by Marlow Weston, who's one of my colleagues at Intel; she's also one of the chairs of the CNCF environmental TAG. I'm going to get the name of our open source project wrong, I think it's Kubernetes Power Mode. And what it does is load shifting across cores on the same CPU. So normally you want to max out one core before allocating work to the other cores; that's the most efficient way to go up the curve.
But most allocators will just spread work across all the cores evenly. And so she's built this kind of Kubernetes scheduler which basically will max out one core at a time, so you get to the top.
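The scheduler behaviour Asim describes, packing work onto one core before waking the next, can be sketched as a greedy first-fit allocator. This is an illustration of the packing idea only, not code from the project he mentions:

```python
def pack_jobs(jobs, core_capacity, n_cores):
    """First-fit packing: fill one core before spilling work to the next."""
    cores = [0.0] * n_cores
    for load in jobs:
        for i in range(n_cores):
            if cores[i] + load <= core_capacity:
                cores[i] += load
                break
    return cores

# Four jobs land on two cores instead of being spread across all four,
# letting the untouched cores stay in a low-power state:
cores = pack_jobs([0.4, 0.3, 0.5, 0.2], core_capacity=1.0, n_cores=4)
active = sum(1 for load in cores if load > 0)
```

Contrast this with a spread-evenly policy, which would keep all four cores partially busy and none of them in their most efficient operating range.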
Chris Adams: Wow, I didn't know that was possible. That's a bit like how certain cars with, say, a V8 will run on only four of the eight cylinders, rather than firing all eight, for fuel efficiency. That sounds like the cloudy equivalent of that idea.
Asim Hussain: But she's also got a second Kubernetes project, I'll get the link, which allows you to change the clock frequency of your chip at the application level. The intention is: if people can overclock, you can also underclock, and underclocking does this amazing thing where you get much more efficient from an energy perspective. Everybody's looking at reporting peak-level efficiency, but if you can say, look, I'm willing to run at 20% less clock speed, you actually gain more than a 20% energy efficiency improvement, though you lose that on the performance side.
So you can dynamically change the clock frequency, which happens a lot on laptops and mobile devices, but it does not happen in the cloud space. It has lots of negative consequences as well. Lots, yeah. You really can't just do it without knowing how the entire stack works, top to bottom. It's a very advanced thing, but if you can take advantage of it for additional efficiencies, then again, reducing that chasm between what we developers think we know about the hardware versus what the hardware actually does is, I think, one of the frontiers of this space.
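Asim's "more than 20% back for a 20% underclock" follows from how dynamic power scales with frequency. In the classic idealized DVFS model, dynamic power scales roughly with the cube of frequency (voltage scales down with frequency, and power goes as V²f), while runtime only grows linearly. This is a textbook approximation that ignores static power, so treat the numbers as illustrative:

```python
def underclock_tradeoff(freq_scale):
    """Idealized DVFS model: power ~ f^3, runtime ~ 1/f, energy = power * time."""
    power = freq_scale ** 3
    runtime = 1.0 / freq_scale
    energy = power * runtime  # net effect: energy ~ f^2
    return energy, runtime

# Run at 80% clock speed: ~36% less energy per job, 25% longer runtime.
energy, runtime = underclock_tradeoff(0.8)
```

That asymmetry, quadratic energy savings against linear slowdown, is why underclocking can beat its nominal percentage, and also why it is a trade-off rather than a free win.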
Chris Adams: This was actually something Arne explained to me when he was looking at why some of the figures differ. We spoke about a project called Scaphandre last week, and he says one of the things that's difficult about this is that, yeah, like you said, the clock speed can go up and down. The mental model I left the conversation with was a bit like revolutions per minute in an engine: you can have it red-lining to go really, really fast.
But if you scale it right back down, then you can be somewhat more efficient, though there are going to be impacts. I didn't realise you had that kind of control at the software level itself, that you could deliberately... I thought you could only ask the CPU for work to be done, rather than say, can you do a bit less? Because that's not like nicing something. That's a different level of
Asim Hussain: That's a whole different level. Nicing is probably... No, it's not like nicing something. It's a very different level of hardware control. Yeah.
Chris Adams: All right. Wow, we went really deep there, deeper than I expected. Okay, so hopefully that should help the person who asked.
Asim Hussain: What was even the question?
Chris Adams: Were there examples of measuring the SCI in pipelines?
Asim Hussain: We went off!
Chris Adams: Yes, there are examples of it. There's lots in the open; the work from Green Coding Berlin is probably some of the stuff that's most in the open. But there's also work done behind various corporate firewalls that you might not be able to see, or probably can't see, unless you employ all kinds of industrial espionage, which I suspect you're probably not going to do. Anyway, okay, let's move on to the next question, because we're burning through our time.
The next question was about the carbon efficiency of GPUs. This seemed to be basically asking: what's the carbon efficiency (or otherwise) of GPUs when they're used for, say, onerous compute like vector search, and is this good for the environment? That's the question I got, and I assume it was a response to people talking about the fact that with this new world of generative AI and LLMs, you use lots and lots of specialized chips, which often look like GPUs or sound like GPUs. Do you want to have a quick go at this, Asim, and then I can probably bounce on some of this?
Asim Hussain: Let me say two things. A, versus a generalized CPU, which is built to handle anything, specialized hardware will typically be more efficient on an energy basis for the work it's designed for. The point, though, is that when you start using GPUs and other specialized hardware, each of them has an idle power draw.
And so if you've got a GPU, or a whole series of them, or whatever the specialized hardware is, and you're not using them, that's actually bad. So it's very important when you have this specialized hardware that you're thinking it through: I've got it, I'm using it, that's why I've got it.
Obviously, if you're in the cloud, it's a different equation, right? Or maybe not, actually, if you can just order a GPU and not really use it. The other thing I would say, and I've seen this conversation go a little bit wonky as well, is that oftentimes the total power of a system increases because a GPU consumes more power, and then people just say, oh, it's less efficient, it's consuming more power, without factoring in that the job will run faster and therefore the total energy will be less.
If that makes sense. I've seen conversations get into confusing territory where people have confused energy and power, because power is just the rate, in watts, whereas energy is the total over time. So that's another way of thinking about carbon efficiency.
Chris Adams: Your point being that you might have a GPU, a graphics processing unit, which is extremely energy intensive, but it runs a job for a short period of time and therefore it could be turned off or scaled back down, right? That's the thinking, that's what you're saying, right?
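The power-versus-energy distinction they're teasing apart is just energy = power × time: a hotter-running chip that finishes sooner can still use less total energy. The wattages and runtimes here are hypothetical round numbers, not benchmarks:

```python
# Energy (watt-hours) = average power draw (watts) x time (hours).
gpu_watt_hours = 300 * 0.5  # a 300 W GPU finishes the job in half an hour
cpu_watt_hours = 100 * 2.0  # a 100 W CPU takes two hours on the same job

# Despite triple the power draw, the GPU uses less total energy here:
assert gpu_watt_hours < cpu_watt_hours  # 150 Wh vs 200 Wh
```

Comparing power meters alone would make the GPU look worse; only the energy integral over the job tells you which option emitted less.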
Asim Hussain: I dunno if they can be turned off; I think they're always on, aren't they? I don't know, actually, I have no idea. But yeah, are there ones that turn off?
Chris Adams: You can see there's a definite impact between something running at a hundred percent and when it's idling; there is a change. But I'll be honest, I'm out of my depth when it comes to figuring out how many people who run data centers switch them off on a regular basis. I suspect the number is very low.
Asim Hussain: Close to zero.
Chris Adams: Yeah. I was actually going to answer this differently.
Asim Hussain: Oh, go on then. Yeah.
Chris Adams: I'd say if you want to talk about the carbon efficiency of GPUs compared to, say, CPUs, it's worth understanding that the emissions come from two places. There's the emissions created from making the actual computer, and the emissions from running it. When you make something specialized like a GPU, that's going to be pretty energy intensive to manufacture. So in many cases you have a bit of a trade-off: if the GPUs are more energy intensive to make than a bunch of CPUs, and you don't use the machines very much, then you don't have much usage to amortize that embodied cost over. In that case, GPUs are going to be pretty carbon inefficient. But for the most part, because these things are so incredibly expensive, they tend to get used a lot, or there's an incentive to use them as much as possible, and even to make them available for free for people to use, or at least to try and grow a market. That's what you see right now with tools like ChatGPT, which lots of us are not paying for: they're made available because the providers want to achieve a certain amount of utilization, so they can actually get some kind of return on the hardware.
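The amortization argument can be sketched with made-up numbers: the same embodied (manufacturing) carbon spread over more jobs means a smaller embodied share per job. All figures below are hypothetical, chosen only to show the shape of the trade-off.

```python
# Hypothetical figures: amortizing embodied (manufacturing) carbon over
# actual usage. Hardware that cost more carbon to make can still come
# out ahead per job if it is heavily utilized.

def carbon_per_job(embodied_kg: float, lifetime_jobs: int,
                   energy_kwh_per_job: float, grid_kg_per_kwh: float) -> float:
    """Embodied share per job plus operational emissions per job."""
    return embodied_kg / lifetime_jobs + energy_kwh_per_job * grid_kg_per_kwh

# Lightly used accelerator: embodied carbon dominates each job.
idle_gpu = carbon_per_job(embodied_kg=150, lifetime_jobs=1_000,
                          energy_kwh_per_job=0.2, grid_kg_per_kwh=0.4)
# Heavily used accelerator: the same embodied carbon spread over many jobs.
busy_gpu = carbon_per_job(embodied_kg=150, lifetime_jobs=1_000_000,
                          energy_kwh_per_job=0.2, grid_kg_per_kwh=0.4)

print(round(idle_gpu, 5), round(busy_gpu, 5))  # kgCO2e per job
```

With the lightly used device the embodied share swamps the operational share; with heavy utilization, operational emissions dominate and the embodied cost per job becomes negligible.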
The thing I would actually draw your attention to, the thing worth looking at, is that we recently had the Hot Carbon conference, and there was a really cool paper which addressed exactly this. The title of the paper was "Reducing the Carbon Impact of Generative AI Inference." There are a number of named authors: Andrew A. Chien from the University of Chicago and Argonne National Laboratory, plus Liuzixuan Lin, Hai Nguyen, Varsha Rao, Tristan Sharma and Rajini Wijayawardana from the University of Chicago. It was a really interesting talk, because it looked at the environmental impact of these tools and said, okay, we've got this whole trend of employing LLMs, large language models, and generative AI in search and things like that.
What does the impact look like? They took the usage figures published for ChatGPT in March 2023, which was something like 1.6 billion users, and based on that they modelled the likely inference cost, which is the cost from using it, and the training cost.
There were a few takeaways. First of all, we often talk about the training cost as the big thing to be aware of, and they said no, the inference was something like ten times the impact. And they said if you were to scale this up to, say, Google's usage, then even with a fixed training cost, the inference is going to have a ginormous impact, basically. So we should really be thinking about the inference part, and in this case, having something like a dedicated fast machine that does the inference, compared to a bunch of CPUs, is interesting for a bunch of other reasons.
Asim Hussain: Yeah, and I just want to say two things. With the increased adoption, interest, and usefulness of AI, inference is going to go through the roof; as you said, the only place it's going is higher as the years go on. As I've said before, nobody invests billions of dollars into AI if it's not a growth sector.
More and more people are going to use it, and that's inference. That's why inference is very interesting; it's going really high. And I just want to say, I completely forgot about the Hot Carbon conference this year. I watched every single talk at Hot Carbon last year.
Let's put it in the show notes, because I think last year's program was amazing. I watched every single video, made copious notes on all of the talks, and I'm looking forward to going through it again this year and doing what you did: just listening to all of them.
Chris Adams: Yeah, dude, we've had some of the speakers from previous talks on, because there've been so many really good ones. I just want to come back to this paper, because I think some really nice things came from it. One of the key things was basically saying: let's assume you're going to have this massive increase in usage. The comparison was, if you were to scale the usage of ChatGPT up to the modelled usage of a mainstream search engine, that's a 55 times increase in use. You might think, oh cripes, that's 55 times the usage. So they tried to project this forward into 2030 and ask: would we have 55 times the carbon footprint in 2030 if we did this? They took some trends and extrapolated them forwards. One of them was that you're probably going to see an increase in energy inefficiency over time, because we have seen, with Moore's...
Asim Hussain: Sorry, you said energy inefficiency, did you mean...
Chris Adams: Energy efficiency, sorry. So they basically said, let's assume between now and 2030 you see a ten times improvement in inference efficiency, based on what we've seen so far in terms of things getting more efficient. They also assumed the carbon intensity of the grid will keep decarbonizing over time, taking current trends and especially what's been coming in with changes in policy. And they asked: with these numbers, is it possible to do something about these figures, and what would they look like in 2030, in the next six and a half years? They modelled this as a way to figure out the actual savings possible from things like carbon-aware programming. One of the key things they said was that because inference isn't super latency sensitive, you can have some distant machine doing a bunch of inference and then piping the results to you. That means you can quite easily run it in lots and lots of greener regions, even if you're accessing it from a place where the energy is not so green.
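The carbon-aware scheduling idea being discussed here can be sketched very simply: because inference tolerates latency, route each batch of requests to whichever region currently has the cleanest grid. The region names and intensity figures below are made up for illustration.

```python
# A minimal sketch of carbon-aware placement: pick the region whose
# grid currently emits the least CO2 per kWh. Figures are invented.

def pick_greenest_region(grid_intensity_g_per_kwh: dict) -> str:
    """Return the region with the lowest current grid carbon intensity."""
    return min(grid_intensity_g_per_kwh, key=grid_intensity_g_per_kwh.get)

current_intensity = {
    "us-east": 450.0,   # coal/gas heavy at this moment
    "eu-north": 90.0,   # lots of hydro and wind
    "ap-south": 700.0,
}

print(pick_greenest_region(current_intensity))  # eu-north
```

A real scheduler would pull live intensity data from a grid-data provider and weigh it against data residency and cost, but the core decision is just this comparison.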
Chris Adams: So they basically said: let's assume you're going to have machines becoming more efficient anyway, and you scale up this much usage; what happens if you're also able to carefully run the inference and serve the requests from the greenest regions?
Asim Hussain: But that's the assumption. The assumption is that you actually have to do green software to decarbonize the software. So it sounds like if we did everything we're asking people to do, we'd be roughly flat. Do they have a number for what happens if people didn't do any of it?
Chris Adams: Yeah, they basically said that if you didn't have any efficiency improvements, 55 times the load would be 55 times the footprint. Assuming the efficiency improvements keep increasing at the rate they have been, then with an uplift of 55 times the usage you'd probably be looking at around 2.6 times the emissions from the grid. But if you were to actually use carbon-aware programming like this, they brought it down so that in the ideal scenario you're looking at 1.2 times, which is kind of mind blowing...
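A back-of-the-envelope version of those projections: the 55x usage growth and the roughly 2.6x and 1.2x outcomes are the figures as recalled in conversation, so treat them as approximate rather than as the paper's exact numbers.

```python
# Headline figures as recalled above (approximate, not the paper's
# exact numbers): 55x inference demand by 2030, and the projected
# emissions multiple under three scenarios.

usage_growth = 55.0                    # inference scaled to search-engine levels

no_improvements = usage_growth * 1.0   # emissions scale directly with load
with_trends = 2.6                      # efficiency gains + grid decarbonization
with_carbon_aware = 1.2                # plus carbon-aware scheduling

# Relative saving attributed to carbon-aware scheduling on top of trends:
extra_saving = 1 - with_carbon_aware / with_trends
print(f"{no_improvements:.0f}x vs {with_trends}x vs {with_carbon_aware}x; "
      f"carbon-aware scheduling cuts a further {extra_saving:.0%}")
```

The striking part is that carbon-aware placement roughly halves the residual emissions even after efficiency and grid trends are already accounted for.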
Asim Hussain: Well, it is mind blowing, but I think it shows how important the work we're talking about is. It reminds me of one of the really great talks from last year's Hot Carbon, which I loved; I've got to apologize, I'm not going to remember which one it was.
But it talked about projecting forward compute growth, and how green software is a way of handling the additional usage and load on the cloud without actually having to build more servers, because fundamentally we're constrained in the rate at which we can grow the cloud, while demand is growing significantly. Being more efficient actually allows you to deal with that growth. You have to be green, you have to use green software, if you want a realistic chance of generative AI being as ubiquitous as you want it to be.
Chris Adams: I mean, the other thing is, you don't have to assume these systems have to be there at all. Maybe the option is: don't. You just don't need to buy all this equipment in the first place. They'll never be a replacement for actually having better data.
Asim Hussain: What if it's just humans in a building answering your question? Is that more efficient? There was a Gartner figure I saw recently saying that by 2025 the total amount of energy used by AI will be higher than the total amount of energy used by the entire human workforce in the world.
Chris Adams: I don't know enough about that, and I'd feel a little bit worried about referring to it. But the point I was getting to is that you're seeing examples where actually just having good domain knowledge turns out to be much, much more effective than having loads and loads of compute.
The good example I've linked to here is a company called Lesan, based in Berlin, who do machine learning specifically for Ethiopian languages. They outperform Google Translate and some of the other large providers, because they built high-quality benchmark data sets in the first place. This is the thing: having high-quality data is another way to reduce the amount of compute used.
Asim Hussain: True. Yeah, very good point.
Chris Adams: And this is before you bear in mind that tokenization is built around the English language, so other languages are going to need more tokens for the same amount of sentences. There's a whole bunch of issues there that we might refer to.
All right, so we dived quite far into the efficiency of GPUs. I think we've got time for maybe one more question before we have to wrap up, Mr. Hussain.
Asim Hussain: Okay. You pick it.
Chris Adams: Okay, so this one is a question about water usage: can the cooling water for data centers be reused? I think one of the worries people have is that in many cases it just gets pumped back into rivers, and when the water is that much hotter, you're basically just cooking the fish, which is...
Asim Hussain: Sorry.
Chris Adams: not very helpful.
Asim Hussain: It depends if you like eating fish, I suppose.
Chris Adams: I don't think it's good, and I don't think the fish enjoy it, right? But basically, that's one of the issues. I think this is really speaking to the fact that one of the big things that's come up is people talking about the water usage of compute, and in particular data centers which are very heavy on generative AI and things like that. There's a really good example we might refer to, which I learned about: Google and one of their data centers in Chile over the last few years. In parts of north and western Europe, for example, where it's cold and there's already lots of water and rainfall around, it's not so much of an issue.
But if you were to put a data center that uses lots and lots of water in a place that suffers drought, it's different. There's an organization called AlgorithmWatch which has written about some of this, because you do see protests against data centers. One of the key findings was that some data centers use something in the region of 169 liters of water per second. If you run that in a place with drought, it's maybe not the most equitable use of a scarce resource, especially for the people who rely on that water to live and survive. There are other examples where large companies have come in and ended up using significant amounts of water. The thing that was interesting about Chile was that Google wanted to deploy a data center there, got a bunch of pushback, and ended up choosing much less water-intensive technology as a result; I think it's adiabatic cooling, essentially a kind of closed-loop system which doesn't rely on evaporating water and then getting rid of it as a way to cool things down. I've added a couple of links, both to AlgorithmWatch talking about this and to the activists in Chile saying, okay, we had a victory here. So yes, there are issues around it, but it's also a case of companies being able to make these choices and often not choosing to, because it's a little bit more expensive. When a company is making a huge amount of money, and Google spent 60 billion buying back its own shares last year, having fairly efficient, less water-intensive cooling in a place that's suffering from drought seems a fair thing to ask for; these are things we should be setting as a norm. There are other organizations doing this too.
Asim Hussain: One of the things, and I've got nothing to back this up, that was hinted to me the other day, I think it was Sarah Bergman who mentioned it on Twitter, is that there might be situations where it's the opposite: being more carbon efficient might actually make you more water intensive.
For instance, doing things that reduce carbon emissions might require more water consumption, which is why I think it's exciting that we're all starting to have this conversation now. We're so focused on carbon, optimizing for carbon, but actually the landscape is much more complicated.
It's much more of a surface where you're trying to minimize the environmental impacts of your choices, and you might have to make trade-offs of one versus the other. If there's water scarcity right now, you might have to increase your carbon emissions. I'm excited that this is where the conversation is evolving to, because once we add water to the mix, we can add other things.
Chris Adams: You see a trade-off for sure, but also, lots of this ultimately comes down to capital expenditure.
Asim Hussain: It can be an "and". Yeah, yeah.
Chris Adams: You are seeing this, but it's also worth bearing in mind that a lot of the impact comes from the energy generation in the first place: if you're burning a bunch of coal to heat up water to generate electricity, there's a huge amount of water being used there.
In fact, I believe freshwater usage for energy generation is the number one source of water usage in America. So when we talk about this, it's also worth thinking about the entire supply chain. Yes, there are absolutely things you can do at the data center level, but if you look through the supply chain there are other areas too. That said, with data centers the impact tends to be very localized, so if water is being used in a place where people depend on the same supply for drinking water in the same town, you can understand why people are a bit miffed, basically.
Asim Hussain: It's like, we don't really think of data centers like coal power plants; we treat them as very different. But at the end of the day, water, in this case, could be a pollutant.
Chris Adams: Yeah.
Asim Hussain: If you're pumping hot water out... I don't know, I do not know enough here.
Please don't quote me. I don't know exactly what happens. I don't think data centers are squirting hot streams of water into rivers or anything like that. I'm just pointing out that some things are so abstracted away from their emissions that you don't really associate them with the entity.
With a coal power plant, we associate it with emissions so strongly that we know what to think about it, how to think about it. But a data center, in a way, generates emissions too, and if it is putting hot water into rivers and streams, isn't that a pollutant?
Chris Adams: Well, yeah, there are all kinds of pollutants. There's noise pollution as well, which you might need to take into account when someone's siting big pieces of infrastructure, because this is industrial infrastructure.
Asim Hussain: It is, yeah.
Chris Adams: There are cases of people having a really hard time with just the whirring and the noise pollution from data centers and crypto mining rigs.
Asim Hussain: Really? If you lived nearby, you'd be able to hear the whirring?
Chris Adams: I'll share a link to an example. There's an interesting case with Amazon specifically, where a bunch of people are complaining about the noise pollution in, I believe, West Virginia, where they can hear it because it's loud enough. But you also see this with cryptocurrency mining in New York State; there have been lots of cases where typically really quiet, serene places have had the calm punctured by the incessant whirring of all these fans. So there are various dimensions you'd need to take into account that go beyond just thinking about carbon and carbon tunnel vision. But let's be honest, dude, most of the time organizations struggle with just thinking about carbon as well as cash, right?
Asim Hussain: Let's add water and noise to it though, Chris. Let's give them everything. Yeah.
Chris Adams: What I'll do is add another link, because there's some fantastic work by Sasha Luccioni, the climate lead at Hugging Face. She wrote a really good piece in Ars Technica about all the various things you need to take into account with the environmental and social impacts of technology, and specifically AI. It's a really nice way in. And I should also share that my organization published a new issue of Branch this week, with a bunch of pieces on this: one from Tamara Kneese, who wrote about some of this, and one from Dr. Theodora Dryer, who's also an expert in this area. We'll share a link to that, because it'll be fun for some people as well.
Oh, blimey. We've gone way over, actually, Asim.
Asim Hussain: That's good. That's good. Great episode.
Chris Adams: We answered those questions, or at least we've peppered these show notes with huge amounts of links for people who might want to learn more, and hopefully we've added some tantalizing hints. Asim, I think we're actually out of time; we got through four questions this time around. I think there are some more, but in the meantime I'm going to say: thank you for coming on and wandering through this with me. Yeah, this was fun, man.
Asim Hussain: Yeah. It's good to see you guys. I love these, I love these mailbag episodes. Let's do more of them.
Chris Adams: Yes, I want to ask you a bit more about the Impact Engine next time as well, because I didn't know about that.
Asim Hussain: Give us a month and I'll be able to get into a lot more detail about it with you. Yeah,
Chris Adams: Okay, cool. Also, if anyone who's listened to this is curious and has questions of their own, please feel free to @ us in various places, or come to the new Green Software discussions page. Asim, I might ask you to point to this, because otherwise I'm going to say podcast.greensoftware.foundation,
Asim Hussain: We'll put it in.
Chris Adams: the address that we normally use. Is it visible? Is there a link?
Asim Hussain: Do you know, we should create a short link. There isn't one; if you actually go to our GitHub organization, there's just a tab called Discussions. But you're right, we'll put it on our website and make sure it's more prominent in future.
Chris Adams: Okay, in the meantime, go to https://podcast.greensoftware.foundation and find the most recent discussions, where you can ask some questions, and if we can fit them into the list, we'll add them as other things come through.
All right, that was us. Lovely seeing you again. Hope the mushrooms are well, and yeah, see you on the flip side, okay?
Asim Hussain: See you then, buddy. Bye.
Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please, do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode!

Questions:
  • SCI is not capturing energy consumed by memory, I/O operations, network calls, etc. So what is your take on it? [3:27]
  • Does the GSF have any real examples of measuring SCI on pipelines of CI/CD? [7:15]
  • What is the carbon efficiency (or otherwise) of GPUs, say, onerous compute vector search? Is that good for the environment? [23:40]
  • Can the cooling water for data centers be reused? [36:28]


TRANSCRIPT BELOW:
Asim Hussain:
We couldn't have done this two years ago. I feel like so many pieces of the puzzle are now coming into place, where people can really very easily, with an hour's worth of work, measure the emissions of a piece of software. Basically, the dream world I have is in six months time, thousands of open source repos all over the world just drop a configuration file into the root of their repo, add a GitHub action, and they're measuring an SCI score for their product.
Chris Adams: Hello, and welcome to Environment Variables, brought to you by the Green Software Foundation. In each episode, we discuss the latest news and events surrounding green software. On our show, you can expect candid conversations with top experts in their field who have a passion for how to reduce the greenhouse gas emissions of software. I'm your host, Chris Adams.
Hello, and welcome to a special mailbag episode of Environment Variables. This is our second installment of the format, where we bring you some of the questions that came up during the recent virtual event hosted by the Green Software Foundation on World Environment Day back in June. If you missed our first episode in this mailbag format, feel free to jump back, where you'll hear some of the other questions that came up, and some of our eloquent, and possibly not quite so eloquent, answers as we ran through them. Today, we're going to run through a few more questions. And as ever, I'm joined by Asim Hussain, Executive Director of the Green Software Foundation.
Hi, Asim!
Asim Hussain: Hi Chris, how are you doing?
Chris Adams: I'm not too bad. A bit grey outside over here in Berlin, but otherwise not too bad, really. Okay, before we dive into the questions we'll run through: if you're new to Environment Variables, every time we record one of these, we share extensive show notes with links to all the papers and sources that we mention.
So if any of this piques your interest, there will be a link you can jump to to continue nerding out about that particular subject. And I think that's pretty much it. Actually, before that, maybe we should introduce ourselves. Asim, I've introduced you as the Executive Director, but I suspect you might want to say a bit more about the Green Software Foundation, and what else you do when you're not working at the GSF.
Asim Hussain: Thanks. Yes, I'm the Executive Director of the Green Software Foundation. I'm also the Chairperson of the Green Software Foundation, so I hold both roles right now. Yeah, I've basically been thinking about software and sustainability as Chris for quite a few years. Outside of the GSF, I'm also the Director of Green Software at Intel, where I try and work through an Intel strategy regarding, you know, greening of software and helping there.
Because, you know, the only people who buy stuff from Intel are people who run software.
Chris Adams: Thank you very much for that. We'll have better and more insightful revelations coming up ahead.
Asim Hussain: It gets better than this,
Chris Adams: Yeah. Yeah, my name is Chris Adams. It's a little bit Monday this morning, it seems. I work at the Green Web Foundation, which is a non profit based in the Netherlands, focusing on reaching an entirely fossil free internet by 2030. And I'm also a maintainer of a library called CO2.js, as well as being one of the chairs of the policy working group inside the Green Software Foundation. I'm also the regular host of this podcast specifically. Should we dive into these questions for the mailbag?
All right.
Asim Hussain: Let's go for it.
Chris Adams: All right.
So the first question that came through was one about the SCI. The question is, this SCI is not capturing energy consumed by memory, IO operation, network calls, etc. What is your take on it? This is a question from the World Environment Day thing. This might be a chance to explain what the SCI is, because as I understood it, it does capture
some of that stuff,
Asim Hussain: Yeah, my answer on the day would have been, "huh? Yeah, it does," or something a lot more eloquent than that. But yeah, the Software Carbon Intensity is a specification being built by the Standards Working Group in the Green Software Foundation. It is almost an ISO standard; our goal for this year is to really go through that process.
Chris Adams: And just to jump in, ISO is the International Organization for Standardization.
Asim Hussain: Yes, that's the one. Yep. Let me very quickly say what it is. It is a method of measuring software carbon intensity, which is a rate. If you listen to a podcast, it'll probably be carbon per minute of listening. It's a rate rather than a total. Other standout aspects are that it's been designed very much by people who build and measure software, to act as a good metric to drive reduction.
So it makes sure that if you did things like move your compute to a greener region, or to a time when the grid is greener, that would actually be recognized in the calculation.
Whereas, for instance, if you use the GHG Protocol, oftentimes stuff like that isn't factored in, and you can do carbon-aware computing till the cows come home, but it wouldn't really affect your GHG score. Those are some of the aspects of the SCI; it's very much built that way. Now, what I will say is, if you actually look at the SCI equation, it's very simple.
You basically per hour, so it's always what we call per hour, so per minute might be the hour. Or per user, user might be the hour. So per hour, you have to figure out how much energy Is consumed. You have to figure out how much, what we call embodied carbon, so how much hardware is being used and if you're, if it's per minute, then you figure out how much energy consumed per minute.
If it's per minute, you just try and figure out how long is this piece of hardware normally used for and divide it by and obviously you get per minute. Then the other thing you also factor in is thing called I, which is the grid emissions factor. So how clean ditch is your electricity, any factoring or what?
Whatever it is for that period of time with electricity. And the key thing there is that's it, and so therefore, It includes everything. It doesn't exclude memory, or I/O, or network, because it's just energy, hardware, and grid emissions, and so as long as you've got some values for that, for your memory, for your I/O, for other things, you can do it. What I will say to answer, I think maybe, I don't think this was in the spirit of the question, but I think it's clear to it, measuring is hard. It's really hard. Like Chris, you've got co2.js And that does a great job of kind of network, but even then you have like multiple flags if you wanna use it in this mode or this model or this assumption.
Like, I love it, I use it all the time these days. What did you say? Like, "all models are wrong, some are useful"? Yes, I do think that calculating an SCI score which includes memory, I/O, network calls, and all the other factors in software is challenging, and I will acknowledge that, but it's also something that a lot of people are working on. We're working on it with things like the Impact Engine in the Foundation, and Chris, you're working on it with co2.js.
Arne is working on it from Green Coding with those models. Yeah.
Chris Adams: with GMT, the Green Metrics Tool.
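The equation Asim walks through above is the published SCI formula, SCI = ((E × I) + M) per R. A minimal sketch of it, with made-up illustrative numbers (the figures below are not from any real system):

```python
def sci_score(energy_kwh, grid_intensity, embodied_g, functional_units):
    """SCI = ((E * I) + M) per R.

    energy_kwh       E: energy consumed by the software (kWh)
    grid_intensity   I: grid emissions factor (gCO2e per kWh)
    embodied_g       M: embodied carbon amortised to this workload (gCO2e)
    functional_units R: the functional unit, e.g. minutes, users, API calls
    """
    return (energy_kwh * grid_intensity + embodied_g) / functional_units


def amortised_embodied(total_embodied_g, reserved_s, lifespan_s,
                       resources_reserved=1.0, total_resources=1.0):
    # Embodied carbon scaled to the share of the hardware's lifetime
    # (and, optionally, capacity) this workload actually uses.
    return (total_embodied_g
            * (reserved_s / lifespan_s)
            * (resources_reserved / total_resources))


# Illustrative: 0.5 kWh at 400 gCO2e/kWh plus 50 g embodied, over 1,000 users.
score = sci_score(0.5, 400, 50, 1000)  # 0.25 gCO2e per user
```

Note that moving compute to a greener region or a greener time shows up directly as a lower I, which is exactly the property Asim contrasts with GHG Protocol reporting.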
All right,
Asim Hussain: metrics, oh yeah, yep.
Chris Adams: Hopefully that should give plenty to refer to. I'll add a couple of links to what this SCI is to make that a little bit clearer, so for people to understand what that might be for that question. Should we jump on to the next question actually, Asim?
Asim Hussain: Yeah, sure,
Chris Adams: Does the GSF have any real examples of measuring the SCI on pipelines of CI/CD? That's a soup of different letters there, but as I understand it: the GSF being the Green Software Foundation, the SCI being the Software Carbon Intensity, which is a way to measure the carbon footprint of software, and CI/CD being continuous integration and continuous delivery, like automating the process of getting software out for people to use, and all
Asim Hussain: mm hmm, yep,
Chris Adams: All right, so now that we've explained what the question meant and unpacked all those TLAs, three-letter acronyms, do you want to have a go at this one? Because I can add a little bit myself with some recent work that we've been doing in my day job.
Asim Hussain: Yeah, so definitely, I'd say there's two things. A: a lot of the work that goes on is just behind closed doors, and that's one of the things I find interesting about this space, is that sometimes you'll just never hear of it. In terms of real examples of measuring the SCI, there's a project called the SCI Guide, which has a number of case studies inside it, where organizations are really trying to document what they're doing and reveal the numbers.
Revealing numbers is very challenging for a lot of organizations, I can attest to it; you have to go through so many levels of approval to reveal your number. So we've only got a couple of examples of those, but there's definitely tooling that we're building to make this a lot easier. We're building something called the Impact Engine Framework, which is what CarbonQL is now called.
So if you've heard me say the word CarbonQL, it's now called the Impact Engine Framework, and it's a tool with a manifest file that you can use to calculate emissions. And you can say, I want to use co2.js, I want to use Cloud Carbon Footprint, I want to use Green Metrics, whatever.
And it helps you measure an SCI score. And where we're starting to think now is we'd like to get to the point where, there is a GitHub Action, basically, the dream world I have is in six months time, thousands of open source repos all over the world, just drop a configuration file into the root of their repo, add a GitHub Action, and they're measuring an SCI score for their product.
It's been two years now in the making, even of the specification; we couldn't have done this two years ago. I feel like so many pieces of the puzzle are now coming into place where people can really, very easily, with an hour's worth of work, measure the emissions of a piece of software. So yeah, the CI/CD thing is coming, I would say, in six months' time, at least from our side.
And it sounds like you've already got some work anyway from the green coding landscape,
Chris Adams: yeah,
I actually didn't know about the impact engine. That's, that's new to me as well,
Asim Hussain: yeah.
Chris Adams: The thing that we've been using: so with my day job, one thing we've been doing with an open source project called Wagtail is working with some of the core developers there, and, on the Google Summer of Code, with a couple of early career technologists who I've been mentoring, to introduce some, essentially, green coding features into Wagtail itself.
Now, the last release of Wagtail came out at the beginning of August, actually the end of July. Wagtail is a content management system, a bit like WordPress, but unlike WordPress it's written in Python, and it's actually written on top of a software library called Django, which is what our own platform uses. Wagtail is used by a number of websites, like NASA's; if you visit the NHS website, you're using a Wagtail website. There's a number of places it's in use. And what we've been doing is, we got chatting to the folks at Green Coding Berlin, which is pretty self explanatory, what they do: they do green coding, and they live in Berlin. We got chatting with them about this because we were trying to understand, okay, if we're going to make some changes, are we going to be able to understand the environmental impact, and are we making progress? They also have a very literally named tool called the Green Metrics Tool. Can you guess what the Green Metrics Tool does, Asim?
Asim Hussain: I don't know, man, it's hard with these, these terms. Does it, does it generate green metrics in a tool?
Chris Adams: Oh, dude, it's so German. I live in Germany, you see.
Asim Hussain: What's it say in German? Say it in German.
Chris Adams: No, we don't actually have one; it's, you know, the Green Metrics Tool is what it is in German too.
Asim Hussain: Okay, all right,
Chris Adams: So, I think GMT is what we end up referring to it,
Asim Hussain: Oh, that's quite funny. Greenwich Mean Time. Greenwich Mean Time as well, yeah, yeah.
Chris Adams: We've been using that. And the thing that I think is quite interesting about what the folks at Green Coding Berlin have been doing is they've realized that, okay, there's a bunch of open source software in the world, so they've been forking a bunch of open source tools and running this.
And then whenever there's a CI run, they've been measuring some of this. They've actually got a project called Eco CI, which is basically a GitHub Action that measures the power used when you do a CI run to test something. So they've got some of these figures, and the thing that they've been doing, which we found quite useful as well, is they've been using a tool which allows us to run through common scenarios.
Like: I go to a website, I browse through a few places, I search for something, I submit a form, I upload something. We've got a set of journeys that we follow, and we're using those as our kind of baseline to see: the work that myself and Aman, the student I've been working with the most, have been doing there, has it been helping or not? Because the particular piece of work that we've done recently is introduce support for a new image file format called AVIF, instead of just using JPEGs, and it typically halves the size of any of the images that you use. But there is a bit of a spike in energy usage compared to what you would normally use, both on the server and in the browser.
So we're now actually trying to run this in various scenarios to see is this actually an improvement on this? Because even though it results in a nicer experience, we're trying to make sure that we're going in the right direction. So that's one of the things we have. There's a couple of things we have going on as well.
But that's the kind of most concrete example that I might refer to, and there's a couple of links to both the output from this and the open source projects, because you can mess around with some of this stuff pretty much right after this podcast, if you really wanted to.
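In the same spirit as Eco CI, reading a hardware energy counter before and after a job, here's a minimal sketch using Linux's RAPL interface. The sysfs path, and whether RAPL is exposed at all, varies by machine and CI runner, and Eco CI's real implementation is more sophisticated than this:

```python
RAPL_PATH = "/sys/class/powercap/intel-rapl:0/energy_uj"  # CPU package 0

def read_energy_uj(path=RAPL_PATH):
    # RAPL exposes a cumulative energy counter in microjoules.
    with open(path) as f:
        return int(f.read())

def measure_energy_joules(workload, read=read_energy_uj):
    """Run `workload` and return the energy consumed, in joules.

    The counter is system-wide and periodically wraps around, so treat
    this as a trend indicator for CI runs rather than precise
    per-process attribution.
    """
    before = read()
    workload()
    after = read()
    return (after - before) / 1_000_000  # microjoules -> joules
```

Making the reader injectable also means the logic can be exercised on machines without RAPL, by passing in a fake counter.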
Asim Hussain: So this is the stuff that is using direct measurement. So you're forking it, running it on like a special rig that is measuring it. Yeah, I think that's interesting. I feel like this is something that's been in discussion with the SCI as well, but we never landed on good terminology for it.
I think we use measurement versus calculation. And we try to use the word "measurement" for what's happening in green coding, like direct measurement, something from counters or from a power meter or something like that. Whereas we use "calculation" when you are just taking some sort of, we call it now, impact observation.
You take some observations about the system, pass them into a model, and get an estimate of emissions. So I think the language here has got to get a little bit more specific. I remember on the calls we were even asking academics whether there was specific language around this, and there wasn't.
Maybe one of the listeners can say, "Actually, Asim, what you're describing: the word for calculation is X and the word for measurement is Y." This is where we're getting to, and I think this is where the conversation is in this kind of general metrics area. One of the reasons I'm exploring modeling is actually for a very interesting use case, which is: once you model, you can simulate.
So once you've got a model, you can then tweak the model and say things like: what if you were to change some aspects of the system? You've got a model, so can you then model that change and estimate the emissions reductions? And that's where modeling has an advantage, or a real disadvantage, in the fact that it's a model and you're not really going to get a great actual measure.
So I'm not too sure, we don't have the answers. I just think this is an interesting question. It's like measurement versus calculation and I haven't fully formed my thoughts on this yet as well. But I think it's going to be an active bit of discussion for a while. Maybe it has been an active bit of discussion.
Maybe I'm just really late to the conversation.
Chris Adams: I'm not sure myself, to be honest, but we'll need to
see. The thing I think is relevant: when we were using this to figure out whether we're making things worse or better inside Wagtail, I asked Arne about some of this, okay, how are you actually coming up with these numbers? And they do a few things.
Yes, they have a rig, they've got a bunch of machines where they're reading the data directly. But they've also been doing a bunch of work with some of the underlying data that's published by various chip manufacturers. Something called SPEC, the
Asim Hussain: SPECpower? Yeah,
Chris Adams: Yeah. And I've shared a link which basically goes into stultifying amounts of detail about what they do. They've spoken about, okay, this is the tool that's used by Green Pixie, by Cloud Carbon Footprint, by Teads, a French advertising company who've been trying to figure this stuff out, and they've basically shared their modeling of it, which could presumably be consumed by Kepler as well. So they're trying to build these models because they don't have access to the underlying data. And this is something we spoke about in the last episode, and the episode before that: why it's a real challenge to get these numbers, especially from the large hyperscale providers, who would really like to have much more control over the language. And in many cases they give honestly quite good reasons for not sharing these figures, citing things like commercial confidentiality or it being an attack vector. This is why I'm quite excited about the Realtime Carbon project, because it's a chance to finally
Asim Hussain: the values.
Chris Adams: of that.
So you can actually have some meaningful numbers, so you can say: are we making it better or are we making it worse? Because even now, in 2023, getting these figures is a real challenge if you're not running your own hardware.
And I guess, I assume, now that you're working at a company that makes the hardware, or makes much more of the hardware, that's a different change for you now, you see more of it from the other side, right?
Asim Hussain: Yeah, I do, and I speak to a lot of people now. And in fact, actually, one of the things that maybe would be useful is a deep dive on SPECpower; if you want to have an episode, I can definitely bring some people in. One of the people on my team, she's been spending a lot of time really getting into the weeds.
And it's fascinating working with people who have built CPUs their entire life, because it's different. You think, Chris, we just write some variables in Visual Studio Code every now and again and claim to understand technology. Once you really get under the hood, there's a lot going on that we are so abstracted away from, and one of the conversations that happens all the time inside Intel is: how do we close that gap between what developers are doing versus what the hardware can do to be more efficient?
And it just sounds like there is this chasm of opportunity here which we're just not taking advantage of. A lot of the stuff that's happening on the Intel side of the equation is just getting people to optimize their code, using standard kinds of optimizations that have been available for ages. And there's a lot of just understanding; I don't even understand how a CPU works sometimes, like the energy curves just do not make any sense to me.
I'm not going to go into depth as to my lack of knowledge of what a CPU is, but I could definitely bring people in who are much more knowledgeable than me, and then maybe let's have a deep dive into that. It'd be a fascinating conversation, like really getting into a chip.
Chris Adams: Yeah, because the thing I've noticed from the outside, and I've seen other people also referring to, is this: do you know how we had this thing back a few years ago where engines had defeat devices, where if they're being tested they're going to work a certain way? It turns out you often see patterns a bit like that whenever you have benchmarks. Because if you design for a benchmark, there are scenarios where a chip will work a certain way that makes it look really good in the benchmark, and that might not necessarily be how it actually works in the real world. You've got that happening in lots of cases. I would really love to deep dive into that, because this is the thing we struggle with. And it's weird that, say, most chips are most efficient at around two thirds capacity, between two thirds and three quarters. You might think, if I turn it all the way down, that will turn all the power down. No, it doesn't work like that.
Asim Hussain: It doesn't. Yeah.
Chris Adams: And there's all these other incentives about where you move computing jobs as a result, which has this kind of knock on effect.
Alright, we've.
Asim Hussain: There's actually really interesting work around this, when we talk about moving compute around different parts of the world. There's a really great open source project run through Marlow Weston, who's one of my colleagues at Intel, and she's also one of the chairs of the CNCF Environmental TAG, and I'm going to get the name of the open source project wrong; I think it's Kubernetes Power Manager. And what it does is load shifting across cores on the same CPU. So normally, you want to max out one core before allocating work to the other cores; that's the most efficient way to go up the curve.
But most allocators will just allocate work across all the cores evenly. And so she's built this Kubernetes scheduler which will basically max out one core at a time, so you get to the top.
Chris Adams: Wow, I didn't know that was possible. That's a bit like how certain cars, if you've got a car with maybe a V8 inside it, will just run on four of the eight cylinders instead of firing all eight, for fuel efficiency. That sounds like the cloudy equivalent of that idea.
Asim Hussain: But she's also got a second Kubernetes project, I'll get the link to it, which allows you to change the clock frequency of your chip at the application level. The intention is: just as people can overclock, you can actually underclock, and underclocking does this amazing thing where you get much more efficient from an energy perspective. Everybody's reporting peak-level efficiency, but if you can just say, look, I'm willing to run at 20% less clock speed, you actually gain more than 20% in energy efficiency improvements, but you lose that on the performance.
So you can dynamically change the clock frequency, which happens a lot on laptops and mobile devices, but it does not happen in the cloud space. It has lots of negative consequences as well. Lots, yeah. You really can't just do it without knowing how an entire stack works, top to bottom. It's a very advanced thing, but if you can take advantage of it for additional efficiencies, again, reducing that chasm between what we developers think we know about the hardware versus what the hardware actually does is, I think, one of the frontiers of this space.
Chris Adams: This was actually something Arne explained to me. He was looking at some of the figures from a project called Scaphandre, which we spoke about last week, and he said one of the things that's difficult about this is that, yeah, like you said, the clock speed can go up and down. The kind of mental model that I left the conversation with was a bit like revolutions per minute in an engine: you can have it redlining to go really, really fast.
But if you scale it right back down, then you can be somewhat more efficient, though there are going to be impacts. I didn't realise you had that kind of control at the software level itself; that you could deliberately... I thought you could only ask the CPU for work to be done, rather than say, can you do it a bit slower? Because that's not like nicing something. That's a different level of
Asim Hussain: That's a whole different level. Nicing is probably... No, it's not like nicing something. It's a very different level of hardware control. Yeah.
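A common first-order model for the underclocking effect Asim describes: dynamic power scales roughly with voltage squared times frequency, and voltage scales roughly with frequency, so power goes roughly as f cubed while runtime only stretches as 1/f. This is a toy model; it ignores static and idle power, which real chips have plenty of, so treat the numbers as illustrative only:

```python
def underclock_tradeoff(clock_fraction):
    """Toy DVFS model: P ~ f^3, runtime ~ 1/f, so energy E = P*t ~ f^2.

    Returns (energy_ratio, runtime_ratio) relative to full clock speed.
    """
    power_ratio = clock_fraction ** 3
    runtime_ratio = 1 / clock_fraction
    energy_ratio = power_ratio * runtime_ratio  # simplifies to f^2
    return energy_ratio, runtime_ratio

# Asim's "20% less clock speed": under this model, about 36% less energy
# (0.8^2 = 0.64) in exchange for about 25% longer runtime.
energy, runtime = underclock_tradeoff(0.8)
```

Which matches the shape of the claim in the conversation: the energy saving (36%) comfortably exceeds the 20% clock reduction, and the cost shows up as performance.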
Chris Adams: All right. Wow, we went really deep there, deeper than expected. Okay, so hopefully that should help the person who asked,
are there
Asim Hussain: even the question? What was even the question?
Chris Adams: there examples of measuring the SCI in pipelines?
Asim Hussain: We went off!
Chris Adams: Yes, there are examples of it, lots in the open. The work from Green Coding Berlin is probably some of the stuff that's really in the open, but there's also work done behind various corporate firewalls that you probably can't see, unless you employ all kinds of industrial espionage, which I suspect you're not going to do. Anyway, okay, let's move on to the next question, Asim, because we're burning through our time.
The next question was about the carbon efficiency of GPUs. This seemed to be basically asking: what's the carbon efficiency, or otherwise, of GPUs when they're used for things like search, and is this good for the environment? I assume this was a response to people talking about the fact that, with this new world of generative AI and LLMs, you use lots and lots of specialized chips, which often look like GPUs or sound like GPUs. Do you want to have a quick go at this, Asim, and then I could probably
bounce on some of this, because I just, yeah.
Asim Hussain: Let me say two things. A: compared with a generalized CPU, which is built to do anything, specialized hardware will be more efficient on an energy basis for the task it's specialized for. I would say the point, though, is that when you start using GPUs and specialized hardware, each of them has an idle power amount.
And so if you've got a GPU, or a whole series of them, or any of this specialized hardware, and you're not using them, that's actually bad. So it's very important when you have this specialized hardware that you think it through: I've got it, I'm using it, that's why I've got it.
Obviously, if you're in the cloud, it's a different equation, right? Or maybe not, actually, if you can just order a GPU and not really use it. And the other thing I would say, and I've seen this conversation go a bit wonky as well: oftentimes the total power of a system increases because a GPU consumes more power, and people just say, oh, it's less efficient, it's consuming more power, without factoring in that the job will run faster and therefore the total energy will be less.
If that makes sense. I've seen conversations get into confusing territory where people have confused energy and power, because power is just watts, energy per second, whereas what matters is the total energy. So that, that's another way
Chris Adams: You're
Asim Hussain: about carbon efficient. Yeah. Was,
Chris Adams: being that you might have a GPU, a graphics processing unit, which is extremely energy intensive, but it runs a job for a short period of time and therefore it could be turned off or could be scaled back down. Right? That's the thinking. That's what you're saying, right?
Asim Hussain: I dunno if they can be turned off. I think they're always on, aren't they? I don't know, actually, I have no idea. But yeah, are there ones that turn off?
Chris Adams: You can see there's a definite impact difference between something running at a hundred percent and when it's idling; there is a change.
But I'll be honest, I'm out of my depth when it comes to figuring out how many people who run data centers switch them off on a regular basis.
I suspect the number is very low.
So,
Asim Hussain: close to zero.
Chris Adams: yeah,
I was actually going to answer this differently.
Asim Hussain: Oh, go on then. Yeah.
Chris Adams: say that if you're asking, if you want to talk about the carbon efficiency of GPUs compared to like CPUs or something like that, it's worth understanding that the emissions will come from two places when you're thinking about this.
There's emissions created from making the actual computer, and there's emissions from running the computer. And when you make something which is specialized, like a GPU, that's going to be pretty energy intensive. So in many cases you have a bit of a trade-off: if you had a bunch of CPUs compared to GPUs, and the GPUs are more energy intensive to make, then if you don't use the machines very much, you don't have much usage to amortize that cost over.
So in that case, GPUs are going to be pretty carbon inefficient. But for the most part, because these things are so incredibly expensive, they tend to get used a lot, or there is an incentive to use them as much as possible, even to make them available for free for people to use, to at least try and grow a market.
And that's what you see right now with various tools like ChatGPT, which lots of us are not paying for. That results in a massive amount of use, because you want to achieve a certain amount of utilization so you can actually get some kind of return on this.
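A back-of-the-envelope version of the trade-off Chris describes, combining operational emissions and amortised embodied emissions for a single job. Every number here is hypothetical; the point is only that a higher-power device that finishes sooner can still win on total energy, and that embodied carbon amortises well only when the hardware stays busy:

```python
def job_footprint_g(power_watts, runtime_s, grid_g_per_kwh,
                    embodied_g, lifespan_s):
    """Operational + amortised embodied carbon for one job, in gCO2e."""
    energy_kwh = power_watts * runtime_s / 3_600_000  # watt-seconds -> kWh
    operational = energy_kwh * grid_g_per_kwh
    amortised = embodied_g * runtime_s / lifespan_s   # time share of lifetime
    return operational + amortised

FOUR_YEARS_S = 4 * 365 * 24 * 3600
# Hypothetical: a 300 W GPU does in 10 minutes what a 100 W CPU does in an hour.
gpu = job_footprint_g(300, 600, 400, embodied_g=150_000, lifespan_s=FOUR_YEARS_S)
cpu = job_footprint_g(100, 3600, 400, embodied_g=50_000, lifespan_s=FOUR_YEARS_S)
# The GPU draws more power, yet uses far less total energy for the same job,
# so its footprint per job comes out lower despite higher embodied carbon.
```

This is the energy-versus-power distinction Asim raises, plus the utilization point: leave the GPU idle and its embodied carbon stops amortising.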
The thing that I would actually draw your attention to, or that might be worth looking at: recently we had the Hot Carbon conference, and there was a really cool paper which specifically addressed this. The title of the paper was "Reducing the Carbon Impact of Generative AI Inference". There are a number of people named on it: Andrew A. Chien from the University of Chicago and Argonne National Laboratory; Hai Nguyen, Varsha Rao, Tristan Sharma and Rajini Wijayawardana from the University of Chicago; and Liuzixuan Lin, I think, right? This was a really interesting talk, because it was basically looking at the environmental impact of tools like generative AI, and saying: okay, we've got this whole trend of employing LLMs, large language models, and generative AI in search and things like that.
What does the impact look like? They basically looked at the usage figures that were published for ChatGPT in March 2023, which was something like 1.6 billion users, and based on that they modelled the likely inference cost, which is the cost from using it, and the training cost.
And there were a few takeaways. First of all, we often talk about the training cost as the big thing to be aware of, and they said no: the inference was something like ten times the impact. And they said if you were to scale this up to, say, Google's usage, then whatever the training cost, inference is going to have a ginormous impact, basically. So we should really be thinking about the inference part, and in this case, having something like a dedicated fast machine that does the inference, compared to a bunch of CPUs, for example, is really cool for a bunch of other reasons.
Asim Hussain: Yeah, and I just want to say two things. With the increased adoption, interest, and usefulness of AI, inference is going to go through the roof; as you said, the only place it's going to go is higher, and the interest is going to grow as the years go on. As I've said before, nobody invests billions of dollars into AI if it's not a growth sector.
People are going to use it, and more people are going to use it. That's inference; that's why inference is very interesting, it's going really high. I just want to say, I completely forgot about the Hot Carbon conference this year. I watched every single talk from the Hot Carbon conference last year.
So let's put it in the show notes, because I think last year's program was amazing. I watched every single video, I made copious notes on all of the talks, and I'm looking forward to going through it again this year and doing what you did, just listening to all of them.
Chris Adams: Yeah, dude, we've had some of the speakers from the previous talks on, because there have been so many really good ones. The thing that I really liked, and I just want to come back to this one because I think there are some really nice things that came from this talk in particular and this paper: one of the key things was basically saying, let's assume you're going to have this massive increase in usage. And I think the comparison was, they said if you were to scale the usage of ChatGPT up to the kind of modelled usage in this paper for, say,
Asim Hussain: Oh,
Chris Adams: a mainstream search engine, a 55 times increase in use. Scaling it up that way, you might think, oh cripes, that's 55 times the usage. Assuming this is in 2030, they basically tried to project this forward and say, well, okay, would we have 55 times the carbon footprint in 2030 if we did this? They took some trends and extrapolated them forwards. One of them was that you're probably going to see an increase in energy inefficiency over time, because we have seen, with Moore's
Asim Hussain: Sorry, you said energy inefficiency, did you...?
Chris Adams: So, energy efficiency. They basically said, let's assume between now and 2030 you see a ten times improvement in inference efficiency, and that's based on what we've seen so far in terms of things continually getting more efficient. They also looked at the carbon intensity of the grid, which will be decarbonizing over time, and they took that from current trends and what's been coming in with changes in policy. And they basically asked: with these numbers, is it possible to do something about these figures, and what would the figures be if you were looking at this in 2030, in the next six and a half years? They modelled some of this as a way to figure out the actual savings possible by using things like carbon-aware programming. And one of the key things they said was that because inference isn't super latency-sensitive, you can have some distant machine doing a bunch of inference and then piping the results to you. That means you can quite easily run this in lots and lots of greener regions, even if you're accessing it from a place where the energy is not so green. Using this versus what we have right now, we're probably not going to have a massive increase, with the figures that I saw
Asim Hussain: Oh, so they,
Chris Adams: versus, yeah.
they basically said, based on this: let's assume you're going to have machines becoming more efficient anyway, and you scale up this much usage. If you were able to carefully run the inference and serve the requests in
Asim Hussain: Oh.
Chris Adams: the greenest regions.
Asim Hussain: But that's the assumption. The assumption is that you have to actually do green software, to actually decarbonize your software. So it sounds like if we did everything we're asking people to do, we'd be flat. Do they have a number for what happens if people didn't do it?
Chris Adams: Yeah, they basically said, assuming you didn't have any energy efficiency improvements, 55 times the load would be 55 times the footprint. They said, assuming you have the efficiency improvements increasing at the same rate as they have been, then with an uplift of 55 times the usage you'd probably be looking at about 2.6, two and a half times the
energy usage, I mean, of the emissions from the grid, right? But they said, if you were to actually use the
Asim Hussain: Carbon-aware
Chris Adams: programming like this, they brought it down to like, the ideal scenario would be you're looking at 1.2, which
Asim Hussain: But that,
Chris Adams: kind of mind blowing...
Asim Hussain: Well, it's mind blowing, but I think it shows how important the work we're talking about is. Actually, it reminds me of one of the really great talks from last year's Hot Carbon, which I loved. I've forgotten which one; I've got to apologize, I'm not going to remember which one it was.
But it was talking about projecting forward compute growth, and how green software is a way of handling the additional usage and load on the cloud without actually having to build more servers. Because fundamentally we are constrained in the rate at which we can actually expand the cloud, but demand is growing significantly as well, so being more efficient actually allows you to deal with growth. You have to be green, you have to use green software, if you want a realistic chance of generative AI being as ubiquitous as you want it to be.
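The arithmetic behind the multipliers quoted above can be sketched like this. The 55x load and roughly 10x efficiency figures come from the discussion; the grid-intensity ratios below are back-solved purely so the outputs land near the quoted 2.6 and 1.2 figures, and are not the paper's actual parameters:

```python
def emissions_multiplier(load_growth, efficiency_gain, grid_ratio):
    """Illustrative emissions multiplier relative to today.

    load_growth      e.g. 55x the queries
    efficiency_gain  e.g. 10x better inference efficiency
    grid_ratio       future vs. current average gCO2e/kWh (below 1 means
                     greener, e.g. decarbonisation plus carbon-aware placement)
    """
    return load_growth / efficiency_gain * grid_ratio

no_improvements = emissions_multiplier(55, 1, 1.0)    # 55x the footprint
efficiency_only = emissions_multiplier(55, 10, 0.47)  # roughly 2.6x
carbon_aware = emissions_multiplier(55, 10, 0.22)     # roughly 1.2x
```

The structure, even with toy numbers, makes Asim's point visible: efficiency gains alone absorb most of the growth, and the remaining gap between roughly 2.6x and roughly 1.2x is the part that only carbon-aware placement can close.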
Chris Adams: I mean, the other thing is, you don't have to assume they have to be there. Maybe the option is: you just don't need to buy all this equipment in the first place. These will never be a replacement for actually having better data.
Asim Hussain: What if they're just humans in a building answering your question? Is that more efficient? There was a Gartner report I saw recently saying that the total amount of energy used by AI by 2025 will be higher than the total amount of energy used by the entire human workforce in the world.
Chris Adams: I don't know enough about that, and I feel a little bit worried about referring to it. But the point I was going to get to was the fact that you're seeing examples where actually just having good domain knowledge turns out to be much, much more effective than having loads and loads of compute.
And the good example that I've linked to here is a company called Lesan. They're based in Berlin, and they do machine translation specifically for Ethiopian languages. They outperform Google Translate and some of the large providers because they've got access to the actual data sets in the first place. This is the thing: having high quality data is another way to reduce the amount of compute used. And this comes up
Asim Hussain: True. Yeah, very good point.
Chris Adams: And this also matters when you bear in mind that tokenization is largely based around the English language, so another language is going to need more tokens for the same amount of sentences. So there's a whole bunch of issues there that we might refer to.
Alright, so we dived quite far into the efficiency of GPUs, and we might come back to that. I think we've got time for maybe one more question before we have to wrap up, Mr. Hussain.
Asim Hussain: Okay. You pick it.
Chris Adams: Okay, so this one is, this is a question about water usage. Can the cooling water for data centers be reused?
And this is a question because people...
Yeah, actually, I think one of the worries is that in many cases it just gets pumped back into rivers, and when the water is that much hotter, you're basically just cooking the fish, which is not...
Asim Hussain: Sorry.
Chris Adams: not very helpful.
I
Asim Hussain: it depends if you like eating, I suppose it depends if you like eating fish,
Chris Adams: don't think it's good. I don't think the fish enjoy this, right, but basically there is- that's one of the issues, but I think this is more actually a case of this is speaking to the fact that in many cases, 1 of the big things that's come up is basically people talking about the water usage with compute, and in particular data centers where, which are very heavy on, uh, generative AI and things like that. And there's a really good example that we might refer to that I learned about, which is Google and some of their data centers in Chile over the last few years. There was a whole thing where you. So in Europe, for example, where there's lots and lots of water, you don't necessarily, or there's parts of North and Western Europe where if they're cold, and they already have lots of water around them and lots of rainfall, then it's not so much of an issue.
But if you were to put a data center that uses lots and lots of water in a place with lots of drought... There's an organization called AlgorithmWatch that has spoken about some of this, because you do see protests against data centers. One of the key things was that you find some data centers using something in the region of 169 liters per second. Now, if you run that in a place which has drought, that's maybe not the most equitable use of a scarce resource, especially for the people who rely on that water to live and survive. There are other examples where large companies have come in and ended up using significant amounts of water. The thing that was interesting about Chile was that Google wanted to deploy a data center there. They had a bunch of pushback, and then they ended up choosing much, much less water-intensive technology as a result; I think it's adiabatic cooling, essentially a kind of closed loop system which doesn't rely on evaporating water and then getting rid of it as a way to cool things down. I've added a couple of links, both to AlgorithmWatch talking about this and to the activists in Chile saying, okay, we had a victory here. So yes, there are issues around it, but it's also a case of: companies can make these choices, but a lot of the time they might not, because it's a little bit more expensive. And when companies are making a huge amount of money, and Google spent 60 billion buying back its own shares last year, having fairly efficient, less water-intensive cooling in a place that's suffering from drought seems a fair thing to ask for, and a thing we should be setting as a norm. There are other organizations doing this too.
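To put that 169 liters per second into perspective, here's a quick back-of-the-envelope sketch. The per-second rate comes from the conversation above; the roughly 2,500 cubic meter Olympic-pool volume is a reference figure assumed here purely for scale:

```python
# Rough scale of a 169 L/s water draw, as mentioned in the discussion.

LITRES_PER_SECOND = 169
SECONDS_PER_DAY = 24 * 60 * 60
OLYMPIC_POOL_M3 = 2_500  # assumed approximate volume of an Olympic pool

litres_per_day = LITRES_PER_SECOND * SECONDS_PER_DAY    # 14,601,600 L/day
cubic_metres_per_day = litres_per_day / 1_000           # ~14,600 m^3/day
pools_per_day = cubic_metres_per_day / OLYMPIC_POOL_M3  # ~5.8 pools/day

print(f"{cubic_metres_per_day:,.0f} m^3 per day, "
      f"roughly {pools_per_day:.1f} Olympic pools")
```

In other words, at that rate a single site draws on the order of several Olympic swimming pools of water every day, which is why siting matters so much in drought-prone regions.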
Asim Hussain: One of the things, and I've got nothing to back this up, one of the things that was hinted to me the other day, I think it was Sara Bergman who might have mentioned it on Twitter, is that there might be situations where it's actually the opposite: being more carbon efficient might make you more water intensive.
Like, for instance, doing things that reduce carbon emissions might require more water consumption, which is why I think it's exciting that we're all starting to have this conversation right now. We're so focused on carbon, and we're optimizing for carbon, but actually the landscape is much more complicated.
It's much more of a surface where you're trying to minimize the environmental impacts of your choices, and you might have to make trade-offs of one versus the other. If there's water scarcity right now, you might have to increase your carbon emissions. I'm excited that this is where the conversation is evolving to, because once we add water to the mix, we can add other things.
Chris Adams: You do see trade-offs, for sure, but in lots of these cases, ultimately, it comes down to capital expenditure.
Asim Hussain: It can be an "and."
Chris Adams: Yeah, exactly.
Asim Hussain: An "and." Yeah, yeah.
Chris Adams: You are seeing this, but it's also worth bearing in mind that a lot of the impact comes from the energy generation in the first place. If you're burning a bunch of coal to heat up water to generate electricity, there's a huge amount of water being used there.
In fact, freshwater usage in energy generation is, I believe, actually the number one source of water usage in America. So when we talk about this, it's also worth thinking about the entire supply chain. Yes, there are absolutely things you can do at the data center level, and if you look through the supply chain there are other areas too. But with data centers the impact tends to be very localized. So there may be water being used, and if it's water being used in a town where people depend on it for drinking water,
you can understand why people are a bit miffed, basically.
Asim Hussain: It's like, we don't really think of data centers like coal power plants, but it's almost just the same. We treat them as very different, but at the end of the day, water in this case could be a pollutant.
Chris Adams: Yeah.
Asim Hussain: If you're pumping hot water out... I don't know, I do not know enough.
Please don't quote me, I don't know exactly what happens here. I don't think data centers are, like, squirting hot streams of water into rivers or something like that. I'm just pointing out that some things are so abstracted away from emissions that you don't really associate them with the entity.
With a coal power plant, we so associate it with emissions that we know what to think about it, how to think about it. But a data center, in a way, generates emissions too. And I'm sorry, but if it is putting hot water into rivers and streams, isn't that a pollutant?
Chris Adams: Well, yeah. There's all kinds of pollutants. There's noise pollution as well,
which you might need to take into account when someone's siting big pieces of infrastructure, because this is industrial infrastructure.
That's the thing.
Asim Hussain: It is. Yeah.
Chris Adams: Like, there are cases of people having a really hard time with just the whirring and the noise pollution from data centers and crypto mining rigs.
Asim Hussain: Really? You can hear it? If you live nearby, you'd be able to hear the whirring?
Chris Adams: I'll share a link to an example. There's an interesting case with Amazon specifically, where there's a bunch of people complaining about the noise pollution in, I think it might be, West Virginia,
who are,
Asim Hussain: Yeah. There's semi
Chris Adams: where they basically hear this because it's loud enough. But you also see this with cryptocurrency mining in New York State; there have been lots of cases where typically really quiet, serene places have had their calm basically punctured by the incessant whirring of,
Asim Hussain: like
Chris Adams: of all these things, yeah, exactly. So there are various dimensions you'd need to take into account that go beyond just thinking about carbon and carbon tunnel vision. But let's be honest, dude, most of the time organizations struggle with just thinking about carbon as well as cash, right?
Asim Hussain: Let's add water and noise to it, though, Chris. Let's give them everything. Yeah.
Chris Adams: And what I'll do, I'll add another link, because there's some fantastic work by Sasha Luccioni, who's the climate lead at Hugging Face. She wrote a really good piece in Ars Technica talking about all the various things you need to take into account with the environmental and social impacts of technology, and specifically AI. It's a really nice way in. And, oh, I should actually share: my organization published a new issue of Branch this week, and it's got a bunch of stuff talking about this from Tamara Kneese, who wrote about some of this, and also Dr. Theodora Dryer, who wrote a piece as well; she's also an expert in this area. We'll share a link to that, because that would be fun for some people as well.
Oh, blimey. We've gone way over, actually, Asim.
Asim Hussain: That's good. That's good. Great episode.
Chris Adams: We answered those questions, or at least we've peppered these show notes with huge amounts of links for people who might want to learn more, and hopefully we've added some tantalizing hints. Asim, I think we're actually at our time; we got through four questions this time around. I think there are some more, but in the meantime, I'm going to have to say: thank you for coming on and wandering through this with me. Yeah, this was fun, man.
Asim Hussain: Yeah. It's good to see you guys. I love these, I love these mailbag episodes. Let's do more of them.
Chris Adams: Yes, I want to ask you a bit more about the Impact Engine next time as well, because I didn't know about that.
Asim Hussain: Give us, give us a month and I'll, and I'll, and I'll be able to get into a lot more detail about it with you. Yeah,
Chris Adams: Okay, cool. Also, if anyone who's listened to this is curious and has questions of their own, please feel free to at us in various places, or even come to the new Green Software Foundation discussions page. I might ask you to point to this, because otherwise I'm just going to give out podcast.greensoftware.foundation,
Asim Hussain: We'll put it in.
Chris Adams: the address that we normally use. Is it visible anywhere?
Asim Hussain: Do you know, we should create a short link. There isn't one; if you actually go to our GitHub organization, there's just a tab called Discussions. But you're right, we'll put it on our website and make sure it's more prominent in the future.
Chris Adams: Okay, in the meantime, go to https://podcast.greensoftware.foundation and find the most recent discussions, where you can ask questions, and if we can fit them in, we'll add them to the list for future episodes.
All right, that was us. Lovely seeing you again. Hope the mushrooms are well, and yeah, see you on the flip side, okay?
Asim Hussain: See you then, buddy. Bye.
Chris Adams: Hey everyone, thanks for listening. Just a reminder to follow Environment Variables on Apple Podcasts, Spotify, Google Podcasts, or wherever you get your podcasts. And please, do leave a rating and review if you like what we're doing. It helps other people discover the show, and of course, we'd love to have more listeners. To find out more about the Green Software Foundation, please visit greensoftware.foundation. That's greensoftware.foundation in any browser. Thanks again, and see you in the next episode!
