The Metaphysics of AI with Eli Laird, Explainable Artificial Intelligence PhD Candidate

thank you so much for taking the time on this Tuesday to come on to the podcast I appreciate it of course thanks for having me of course um so for listeners at home that don't know anything about you give us like a little brief who you are what you do okay just a little spiel yeah so uh my name's Eli obviously um I am a PhD student in AI at SMU or Southern Methodist University in Dallas and I've been studying computer science and AI for about six years now um not my PhD my PhD just started so um but all of your undergrad and Master's research so I've been like building up to this point so I can finally get on the road but um yeah so I've been studying computer science for this long I'm really interested in science and how we can build like artificial intelligence is that even possible right right so that's kind of like my whole goal and yeah so that's that's me I love that um well thank you for coming on the podcast and spending time it's not an insignificant amount of time so thank you I appreciate it um and on that note to kind of kick off our conversation I just like to start with one thing you're grateful for it could be anything that comes to mind I think I'm really grateful for just the people around me you know um doing like a lot of school or doing a lot of anything that you love like really takes a lot of support there's definitely no way I could have done it without my family my friends girlfriend everyone uh just being supportive and like pushing me to reach my goals yeah well and I like how you frame that about even when it's something you love it can be difficult absolutely oh yeah yeah I heard a quote the other day that was like hard work for something you don't love is stress but hard work for something you love is passion but that doesn't mean that it's not hard grueling work I love that quote that sounds great but yeah it is it's definitely there's no way to get around the hard work yeah so best to have it with other people yeah so I love that that's wonderful now to go a little bit more in depth about you and before we really dive into the topic of today's episode which is going to be AI because you are the AI expert that I know um I thought it'd be really cool to have someone to discuss that especially since it's so big in the zeitgeist in technology and also in education and everything right now um but I would love to hear your story okay I know that's a big question but however you interpret that however you'd like to answer it what is your story like let's see um without going into like specifics of like oh I lived here and well that's true whatever you want to share is perfect it's always been about kind of finding something I'm really interested in and then figuring out a way to get there and that has taken many forms throughout my life for example in like middle school and high school it was about drums oh I didn't know that and so it's like how can I be on the drumline of the marching band how can I be in this band how can I do this right and it was doing whatever it takes to get there um as I transitioned out of high school into college it was just like okay what's my next thing that I want to do um whether that was get an internship or uh get into this master's program or even like figure out what you know I wanted to study for the rest of my life it was figuring out a way figuring out your own path to get there and I think I've just followed that idea of kind of building your own path my whole life yeah
um and it's I think it's worked out so far yeah I would say so it's definitely more enjoyable than following someone else's path yeah um so yeah I love that I think that's really wonderful um because a hundred percent I think there are so many like well-worn paths out there that you're offered to take where it's like this is what's recommended but those people aren't you yeah well so it's like how could any of those paths ever truly be the thing that is going to be right for you except for when you design it for yourself exactly and also something I heard from an old mentor of mine was like from a competitive standpoint if you're following the same path to say a company at best you're the second person oh yeah so you're never going to be you know a pioneer in that sense that is such an insightful way to put that but that's so true even for people who want to do theoretically traditional things you still want to be the pioneer of what you're doing right and so it's like if you follow the same path you're never going to be the first person to do anything right and to me that just sounds boring but I think so right but I'm like that's the way that we agree not everyone does and it's okay I believe if you're using someone else's path or using that as inspiration or a foundation but it doesn't have to be the whole deal right and I mean there's no reason why you can't follow different paths and jump on different paths and all that's doing is just mixing everything into what you think is your path right um you got to start somewhere yeah so start with something and mix it up and on that note that you have to start somewhere the kind of sentence that you used to describe your life was you were trying to figure out something that interested you and then figure out a way to get there and I think that even though figuring out a way to get there is really hard figuring out what interests you is arguably harder for a lot of people so talk to me about that process and what it was like discovering that like oh music and playing an instrument is something that really lights me up and now with AI and research like what is that process of finding something that interests you like that is an interesting question because fortunately I don't know if it's possible all the time to figure out what you're interested in I think like everyone says follow your passion and I think really I'm grateful and lucky to have a passion for what I do I know a lot of people don't um I think the main thing was just trying a whole bunch of different things yeah like for example coming into college like AI was not what I was interested in computer science was not even a question really like I came in as a mechanical engineer my dream or quote dream was to build race cars for formula one that sounds pretty great so um once I started that and started the process of taking classes and stuff I was like oh wait a second this is not for me

um so yeah that was a failed path but not really a failure but uh from that point um failing forward learning yeah yeah um I was able to I pivoted to computer science more so because by happenstance um my roommate was working on computer science like he was in the major he was working on a problem for his class and he asked me to help him and I helped him and I was like shoot this is really fun I forgot that I really enjoyed it yes it was kind of by chance that I got back into it right so I think a lot of it is luck but it's also just like opening as many doors as possible so you can choose the right one yeah and on opening as many doors as possible what would you say to someone that feels like they don't even know how to open a door like what is the first step for someone who's like I've never even considered living a life that I'm passionate about I feel like your doors can be anything they can be big they can be small a big one would be moving to a new city for a new job yeah that's complicated and a small one could be talking to someone in a bar because it's different or in a coffee shop better example yeah um that door that person may have an idea that may you know capture you that you may be pursuing for the rest of your life yeah so or be a relationship that completely extends your life honestly I mean the doors are the people because you know we're all this big network and the only way to figure out what else is going on in the network is to start talking to people and so I mean that's kind of how I got into computer science I heard some friends taking classes I was like oh I'll give that a shot yeah and that opened the door and then keep talking keep asking questions keep asking questions I also love the idea that like people are doors right it's like the more people you meet the more experiences you're exposed to the more life lived that you can talk about and compile and eventually build whatever kind of life brings you joy and purpose and I mean otherwise how else would you if you're locked in your room thinking yeah are we gonna know about anything else any other opportunities yeah books are just words from people too so that's another way to do it so yeah 100 I love that so to pivot a little bit more into AI um this is something that I am certainly not an expert in but I think that it has a lot of very interesting um consequences and implications for a couple different areas of philosophy one consciousness and like the philosophy of mind I think is always a big one that's talked about but to kind of extrapolate from there this podcast is called Making Meaning how does our meaning making change and our identity and personhood change the bigger and broader AI gets so that's a huge question but let's start with the building blocks what would be your baseline definition of AI for someone who has never heard the term before

I'm kind of borrowing from like definitions yeah please do the most baseline definition of artificial intelligence or just intelligence is the ability to

do a multitude of different tasks um based on information okay so we can do that as humans we take in sensory data we see things and we do tasks based on that information right and so that link between say like vision and a task like walking the link that links those two is the intelligence almost okay it's like you can have all this information you want but if you can't transform that into action it doesn't really matter doesn't matter and so we naturally can do that and what we call consciousness which we have no idea how that works right um how does it arise right so that's like human intelligence is our ability to do that artificial intelligence is intelligence built by humans now it's not restricted to that because obviously down the road if we build AI AI could build AI exactly so it's I don't want to say non-naturally occurring intelligence because how do you define natural at that point but yeah um in the end it's all intelligence but for now artificial intelligence is intelligence as I define it built by us humans built by hand and not quote unquote naturally arising yeah exactly nice so jumping to a little bit of a bigger question your main research in AI what is the biggest thing that you center your focus on you recently presented at a conference in Lisbon about your research which is super exciting tell us a little bit about your main research your area of study your area of interest in the AI field because it's so vast in and of itself right it's a hard one well we talked about picking doors part of the process of doing PhDs is yeah what door do you want to open within the vast thing right um I landed on this subfield called explainable AI and what explainable AI tackles is the problem of building AI systems that themselves can explain to humans how they are making their decisions and so this is really important because as we start building large-scale AI systems that start you know say driving cars or making decisions on jobs or right running companies um just as with a human we could say why did you make that decision we have to be able to do that with the other intelligence as well right so that is vastly interesting to me because I think like you said there's this kind of baseline understanding of explainable intelligence where it's like why did you like two siblings who get in a fight right and the mom's like well why did you hit your sister it's like because she made me mad right you have a very instant explainable reason for why that action happened but if you kind of zoom out we don't have like you were talking about earlier an explanation for consciousness how we actually operate and we don't actually have a very good way of explaining to anyone why I am the way I am right so putting that in context of AI your work centers on this first area of questioning right having an AI that could explain not why did you hit your sister but how were you able to do this math equation yeah so I think first off right now it's really elementary so explainable AI it's very simple barely even cause and effect of like why is this picture of a cat not a dog something like that or even that's even more complicated than is this a cat and where is the cat in that image got it so um from those fundamental building blocks we can combine other building blocks and hopefully get a larger explanation of like what is a cat or why it's a cat but I think down the road once we get more complicated systems the key to actually getting artificial consciousness is to build a
system that can ask those why questions right um not only for the purpose of explaining to us but to explain to itself and so ask its own reflexive questions just like we do yeah well that's how a lot of people define consciousness in philosophy is well human beings have the highest level of consciousness because we are able to ask why am I conscious right which I mean even the very small question of like why did I what was your example like why did I hit my sister yeah

as a sister with a brother and you as a brother with two sisters that is clearly an example about me but if you zoom in too far of like why did you do this and you go into the brain and see these connections fired or whatever that doesn't tell you much yeah so um and when we zoom in to that we don't really find consciousness either yeah just synapses and neurons firing right and that's why it's so complicated to define consciousness is because it's somewhere within that realm or even outside we don't really know yeah um almost so that's so interesting to me too in the sense that if consciousness happens somewhere in between an electrical impulse in your brain and the action actually occurring that reminds me of your definition of intelligence that it happens in between whatever decision you've made and actually acting on it right yeah so it almost seems as though the question of consciousness is embedded in what AI is yeah I mean I think well it's difficult because it's so hard to measure yeah you can measure the things on the ends but you can't measure the middle and that's the fundamental question yeah yeah it's like how do we do that how do we know when we've gotten there right
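To make the cat-versus-dog explanation above concrete, here is a minimal sketch of one common explainability technique, occlusion-based attribution: cover up patches of an image and watch how much a classifier's "cat" score drops. The pretrained model, the file name, and the ImageNet class index below are illustrative assumptions for the sketch, not details from the conversation.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Any pretrained image classifier works; ResNet-18 is just a convenient stand-in.
model = models.resnet18(weights="IMAGENET1K_V1").eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

# "cat.jpg" is a hypothetical input file; 281 is ImageNet's "tabby cat" class.
image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
CAT_CLASS = 281

with torch.no_grad():
    base_score = torch.softmax(model(image), dim=1)[0, CAT_CLASS].item()

# Slide a grey patch across the image; wherever the cat score drops the most is
# the evidence the model relied on -- a simple "why is this a cat" explanation.
patch = 56
heatmap = torch.zeros(224 // patch, 224 // patch)
for i in range(0, 224, patch):
    for j in range(0, 224, patch):
        occluded = image.clone()
        occluded[:, :, i:i + patch, j:j + patch] = 0.5
        with torch.no_grad():
            score = torch.softmax(model(occluded), dim=1)[0, CAT_CLASS].item()
        heatmap[i // patch, j // patch] = base_score - score

print(heatmap)  # larger values mark the regions that drove the "cat" decision
```

The explainable-AI literature uses more refined attribution methods (gradients, SHAP values, and so on), but they answer the same barely-cause-and-effect question described here: which part of the input made the model decide what it decided.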

yeah and so I think the only way we'll really know to ourselves is like I can sit across from you and know that you're a real person right you are there you have consciousness and that's my consciousness telling me that right so as soon as my consciousness starts telling me that the AI systems have the same thing that's the best I got honestly yeah well and even that I think poses an interesting question for humans because actually this goes back to my philosophy research which is part of the reason why I wanted to have you on because I think there's a very interesting intersection here but there's actually no way for me to conclusively prove that I know that you are not a figment of my consciousness right logically I know right that you are another person that has a full life and you are a full being that is conscious but I can't prove that right and so it's at what point does AI take on the same level of consciousness it's like well if I can't even prove it for myself that someone else exists like you said AI could already be there and we just don't know see that's the really interesting part it's like is it already here yeah and we have no idea of even being able to identify like check mark yes it happened right so I mean we're very cause and effect focused so like as any good scientist it makes sense yeah yeah but um we won't really know until we start seeing some effects right right which vary by the institute that you ask yeah which I mean the whole field of AI is always divided on what does that even mean is it even required to have consciousness to have an impact yeah which I don't believe so um to have an impact it's like something can be vastly useful and just because it doesn't have a consciousness shouldn't mean that it's not an extremely helpful beneficial technology or extremely dangerous too yeah 100 for example like viruses are incredibly simple yeah they're not alive they're just little bits of code RNA right and they can cause tremendous pain they can cause pandemic sickness as we've seen exactly so living firsthand experience but um something so simple can do so much and so that really opens up the conversation in AI of like you've got to be careful about the small things too it's not just the consciousness that's going to potentially cause problems right it's the smaller things that build up to that too yeah so a couple things that I want to touch on before kind of rounding back to this question of consciousness because that is the thing that I am just like consumed by on a daily basis is like why are we here right so I do want to come back to it but I think a lot of people they hear AI and automatically are like scary scared bad no right and I think that like you said it's a really interesting balance because you want to be very careful and intentional about the work that you're doing to make sure that any avenue isn't being unnecessarily harmful but also I feel like real consequences that happen like from things that we've seen in the past are usually accidental so when people come to you as someone who is like at the forefront of your field and the forefront of new research and this very hot button issue or not issue topic what do you say when someone is scared of AI um one say calm down

we're not at the point in my opinion that we need to start fear-mongering and like oh my God we gotta shut down things right um we're at the point where we need to start thinking about those possibilities sure but adding in fear I believe adds more harm at the moment when fear prevents people from acting logically right exactly and so fear could even cause somebody to do those things that they were even fearful of for example like a typical arms race yeah I'm fearful of this other country my adversary is going to be building this AI so therefore I'm going to build it right it almost causes like a self-fulfilling prophecy of fear yeah and I think we're at the point where we have two roads they don't necessarily diverge but where AI can cause tremendous like

impact on health on the economy on the climate well it's so broad that it can solve almost everything we can think of right because so could we yeah intelligence is solving problems right um I think we don't know if the benefits outweigh the risks yeah because we don't know all the benefits yeah and we won't ever really but um we're not even at the fork yet so yeah we can't I think fear will cause us to quit before we even get started well and I think that with anything you never know if the benefits outweigh the risks until it actually happens right like in any presentation of something new happening throughout history or even better example I'm like jumping all over the place because this gets my like wheels turning thinking about it but I'm like when I'm thinking of some event happening in the future that I'm really nervous about I will like worst case scenario what are the thousands of options that could ever happen and guaranteed it will always be something that I could never have guessed exactly and it's like it's the same thing where doctors used to prescribe cigarettes now we know that they're very bad right it's like the more things are around the more you learn about it and there's things that you just can't know until it is put into practice exactly but fear only holds you back from learning that information I suppose right which speaking of fear I'm fearful of us stopping all the research too early yeah um which unfortunately AI research is tied to a lot of the things that we're fearful of like one company becoming the sole owner of AGI or artificial general intelligence the problem is right now in our field because it costs so much money to build these systems yes and really high performance computers networking and all this crazy stuff we're kind of at a point where we don't have a choice but

what we've seen recently with you know the ongoing development of ChatGPT right um the big fear was okay OpenAI is going to have a monopoly on this they're going to be the sole you know company that's going to take over the world yeah but unexpectedly uh the open source community of researchers actually have been building their own models that are competing against it and so we didn't know that was going to happen because we thought that you need hundreds of millions of dollars billions of dollars to do these things but a large group of just people on the web have been able to combat it so which is crazy to think about the power of just human ingenuity right right exactly so the point is it just shows that humans we will find a way yeah um that doesn't mean that we can't go too far sure um but there's so many different possibilities that we don't even know so I think the best bet is to try as many as possible and see what paths are bad yeah which ones are good well and I think there shouldn't be anything wrong with conducting research I mean the same way that um there are safety protocols in place for making new vaccines right or for any like biomedical um experiments they're always done in like very safe regulated labs and like there's very safe protocols for disposing of biohazards and stuff it can be very harmful I'm sure speaking as someone who does not know this for a fact but there are plenty of ways that you can protect the research that you're doing like within this computer element from getting out of hand before you're ready for it to I guess that's kind of part of the question mark with AI is yeah if it's self-learning how do you prevent it from taking over even the safety measures but it would seem to me like there would be a way to build in safety measures so that you can do research and figure out okay we need to make sure we build this precaution into any model or product or anything like that and I think the way forward to have the safest way to do research in AI is to have an open system which a lot of people are you know criticizing large companies that are doing their research and then not publishing how they got there oh interesting and so the key to a safe development and pursuit of AI is an open source community that's doing research and to share the ideas and to share like oh I found a solution that can be harmful all right how can we build around it yeah whereas otherwise if a company finds that solution and holds it to themselves guaranteed someone's going to find that solution and then they're not going to tell anyone about it and it's going to be too late and they may not have the resources to actually fix it and so just transparency is the clear obvious choice yeah again I tend to agree with that but I think also information is power and information is also money and so yeah it goes directly into competition yeah so on that line of thought another thing that I wanted to make sure that we touched on um because obviously there's the strikes going on right now when we're recording this who knows this will be released in a few weeks so hopefully the strikes have been resolved by then but SAG-AFTRA and um the WGA the actors and writers unions in Hollywood primarily um are all on strike because of unfair wages and a really big sticking point in those strikes is why would we pay you when AI can now write scripts um so I'll just throw that out there as food for thought I don't know if there's a good answer to it obviously I think that you can have writers
and computer technology work hand in hand like one does not mean you can't have the other um but I think this is one of the very first things we're seeing in terms of economic change not just scientific change in relationship to AI and I would love to know your thoughts so yeah there's a lot of worry about AI replacing jobs which right it will and that's kind of unavoidable um I do believe that it will also create new jobs I don't know how that's you know we can't really we had no idea of what the internet would have created but it's true I'm hopeful but on the specific thing for like generative AI like ChatGPT for writing or Midjourney and all these different image creation AIs for creating movies and everything else um AI the way it is now is trained on data data comes from us right um and it's all scraped from the internet it's all the data on the internet essentially and so at best these systems can get up to us that's it right and the data on the internet is never going to be a perfect representation of humans yeah and so that in itself is already an approximation of human nature and so if you're building an approximator which is the AI on approximated data you're never going to reach that level it's a copy of creativity yeah exactly and so well we don't know exactly what creativity is either yeah um we've seen this recently with ChatGPT and everything it's extremely limited already it'll get better it'll be a lot better but it's limited by what it's given and so until we reach consciousness which I don't think we're close to yeah I don't think we're gonna have like real truly creative AI right so I think we'll always have a need for humans in that sense yeah I think it's interesting too because I've never thought about AI as like the information that it creates being like a duplication of a duplication but it is almost like marrying information to create a new thing but even in that it's like well it's still using things that humans have already created to do that work and so to truly add something new to the conversation of whatever field you're in you do believe that a human touch is needed for that just in your this is totally your opinion at this point yeah I still fully believe that yeah and also another point is like do humans even want to say like go to a restaurant that's fully run by robots or watch a movie that's fully written by AI like yeah in theory you know it'll personalize to you but we kind of like variety yeah like same thing we're fundamentally we like being around humans and so I think people will naturally kind of prefer that's so interesting human-created stuff because one we are herd pack animals right like we like being around other fellow creatures and I think that there's a lot of strength in diversity and it sounds like when you duplicate a duplication maybe you lose a little bit of the diversity of like thought and ingenuity within that because you're only replicating um not to say that AI like you said won't get better and be able to do certain things in the future um but also I think this element of being challenged is really important and I know for me I'm like that's something I look for in my friendships and relationships and with my family I'm like I want to surround myself with people and things who provide a challenge that make me feel like I'm growing and if something's catered and created just for what you want I would have to imagine that would get old very quickly well I mean even look at everything that people hate
watch right like hate watching is one of the number one ways that internet jobs are created right like in AI certainly wouldn't think to create something that people hate right unless they're programmed too which is always the next question but I think that raises some really interesting points in terms of creativity um to follow down this line even further and kind of tie back to our conversation of consciousness I had not thought about this until we were talking earlier but I think a lot of people are like oh this is a very scary thing what does it mean for human beings could a computer ever be considered human at what point does a computer become a human being like all of these very sci-fi George Orwell questions that people start coming up with right do you think that there is also a world in which

those very questions that so many people are afraid of are the thing that help us uncover more information about ourselves in the sense that like the more that we learn how AI is developing consciousness or getting closer and closer the more we actually understand our fundamental nature and that instead of it being this scary thing it could actually be a huge tool for growth and self-understanding certainly um I would think that as we start developing AI and figuring out how we can build an intelligence we start to understand how ours works and so with that one could think instead of building two separate things we're reaching the same point right one thing at one point and so I think the process of creating is how we can understand ourselves too yeah and creating the flaws also I mean like I mentioned earlier it's a mirror of ourselves yeah it's the data so we're just building a mirror that we can you know form which it's so funny to me that like it seems like time and time again what humans always do is just create something in their image but even if you look back at the Christian Bible it says that God created humans in his own image it's like even in this one um religion so this is just one example but this historic text the being that's more powerful than any other beings is still doing something that's borderline rooted in narcissism to like center yourself in it um not to say that the Judeo-Christian God is narcissistic but I think you understand the point I'm trying to make of like human creation always seems to round back to that and it's like well if we're just creating something that is ourselves is there anything to be afraid of except ourselves well I mean a reason why people are worried about building AIs is because we're building it based off of ourselves yeah and so how can an imperfect creature build a you know perfect creature yeah I don't know if that's possible um so yeah I mean it goes back to us yeah AI's gonna be bad if we're bad yeah maybe we solve our problem first yeah yeah AI won't be the problem I mean it's always been us that's the problem it's also like this weird chicken and egg situation too where it's like well which one will come first AI helping solve issues or us solving issues so that AI isn't such a big one well it does make me think a little bit about a creator complex where of course we're in Texas growing up in the Bible Belt I don't know what religious tradition you ascribe to or if you ascribe to one but um you know in any religion there is a creator story there is a how the world was created and how we were created across the board that is something that is always included and it's always something that's bigger and grander and greater than us that has created us how does that story not mirror us creating an AI right it almost creates this like Horton Hears a Who effect where then like you said how do imperfect creatures create a perfect machine well theoretically in a lot of these creator stories how did a perfect being create imperfect beings

just like shoot yeah which one came first right so is it possible or how do you even define perfect that's probably the other question great great question is that even possible are we perfect are we not perfect yeah this whole debate already I don't know it gets a little meta yeah it does it does but like yeah I mean can you even answer those questions with AI I don't think so because I mean one way to say it is we could build a perfect AI if it matches us that's a perfect match theoretically a perfect match yeah to us we don't even know what would be above us because that's all we know is ourselves so how are we supposed to know what is better what a higher being would even look like exactly so that's pretty much all we can do yeah well and then it's like at what point it's like well maybe they're a more intelligent being but does that mean they're better like you get into this question of like what defines goodness yeah even yeah which that's going to get even more complicated once we gotta figure out we gotta answer what is consciousness so yeah I mean it's gonna it's essentially a loop that we've always had yeah philosophy of just why are we here right and then we'll be asking why is that here and what is that and how does that work yeah so the biggest thing that I'm really taking away from this conversation in terms of the question of consciousness is AI is just another really profound example of

how human beings tend to stumble into these same questions time and time and time again right it's a loop yeah we've always done the same thing yeah as well and the more advanced technology gets

right right and maybe it's because like technology has been changing but are humans really different you know our nature's been kind of the same throughout all of history yeah no matter if it's sticks and stones or flying cars yeah a human is a human yeah in a lot of ways I totally agree so a couple questions before we round out the podcast um what has studying and researching AI meant for you and how you make meaning in your life so we started the podcast and you talked a lot about how trying to find something that really interests you and then figuring out a way to pursue that was really the big journey of your life now that you have found AI and you're choosing to pursue it how has that informed that life's journey and what your purpose has become um so I'm a very curious person I've always liked asking questions and that's kind of why I've fallen into the research realm um and I feel like AI is a great area I think it's one of the most important problems that we could ask because it's the question of who are we as well so what are we what is consciousness and that's always just like hit me it was just like I don't know why it's important but it just is right yeah you know and getting away from consciousness like on the intelligence side of things like doing things that do other things or building something that can do other things well that's so broad that you could do anything right so I ask myself sometimes am I really just interested in AI or in what it can do afterwards so it's almost like opening up many other doors for example like you can use AI for the medical field you can use AI for space exploration yeah use it for anything so you could use it to figure out a way to make a pit stop on a Formula One car a whole lot faster maybe I'll go back to engineering yeah but uh so it's almost just like a universal key it just opens up doors so I don't really have to make decisions yeah you can just stay curious right so I can just do what is constantly you know bugging me in my mind all through the lens of AI I guess I love that I think that that is like kind of ingenious like you hacked the system but you get to just like stay really open and curious about all these things but be like but this is what AI has to say about it so you can continue exploring because I do think so often again going back to our conversation at the beginning of the podcast of having so many paths that people say you have to follow or so many things that are like well this is the well worn path like that's how you make it happen that's how you are successful but being able to find a path that I mean no one is going to look at someone who is in doctorate school at a very prestigious university who's presenting on something at the forefront of their area and their field and be like oh okay well you're not successful but at the same time you found a way to do something that I think anyone can admire and appreciate and still make it your own which is I think really powerful and creative in its own right that you have found a way to like do that for yourself I think that's really really cool I certainly think so because I mean I've had jobs in the past where I just didn't really enjoy what it was and I don't see a lot of point in doing something that you don't really enjoy but also just like can you see yourself doing this forever right you know and I understand like that's not possible for everyone unfortunately but
um I'm just lucky enough to have you know stumbled upon something that I would love to do for the rest of my life that's really really epic and I'm glad that you found that um okay final two questions to round out the pod one is there anything we missed anything in light of our conversation that you want to clarify anything that you want to go back to or anything that you're like this is totally random but I need to say it I think we covered most of the big things um we did talk about some of the bad sides so I'd like to highlight some of the exciting things yeah 100 please do because as I mentioned like artificial intelligence is just intelligence which could be applied to anything right um the biggest things that I see coming in the next say decade or so just the medical applications are ridiculous and they're growing so fast because we're building these new systems that can do things we had no idea we could do like recently a company called DeepMind um solved a long-standing problem we never thought possible which is like folding proteins which I don't quite understand but like um it allows doctors and companies and researchers to actually design specific drugs um to like the individual person so the ability to actually edit stuff like that using AI is pretty crazy um also because it's intelligence you can apply it to say economic problems yeah you could use it for figuring out how to allocate resources or how to best efficiently you know build companies and essentially solving efficiency problems yeah um there's a lot of work focusing on climate as well from a lot of different perspectives whether it's like material science of like building more efficient materials or building superconducting materials or whatever AI is embedded in everything so yeah I think it's going to be growing everywhere we're going to see it all the time it's not going to be as big of a word anymore right and it may be like one of those movies that hopefully on the good side it'll be a good episode of Black Mirror not a bad one if they ever make one yeah yeah I'd like to see that day I'm really glad you uh pulled focus to the good parts of AI because 100 I'm like the lives that can be saved the healing that can be done the important work of climate change and all of those great things that if you can have more intelligence focused on it at one time problems get solved faster so that is really really exciting awesome well final question just to kind of wrap our conversation up in a little bow what is one word that describes how you're feeling right now

energized oh I love that word that's such a good one tell me why I don't know it's just like talking about something that I love with a good friend of mine and with a good glass of wine yeah it's just it's exciting and that's kind of why I like doing it yeah that's wonderful conversations are great and that's why I love doing this so again thank you thank you thank you so much for donating your time being a wonderful friend and donating a glass of wine to me as well from your wonderful trip to Portugal um yeah thank you thank you so much for having me on here it's my first podcast this is a good experience and it's easily the most enjoyable by the same token also the least enjoyable but yeah thank you Eli thanks
