
Episode 8 | Navigating AI in Education: Ethical Use, Guidelines, and Positive Impact

InstructureCast (00:08.045)

Welcome to EduCast 3000. It's the most transformative time in the history of education. So join us as we break down the fourth wall and reflect on what's happening: the good, the bad, and even the chaotic. Here are your hosts.

 

Melissa Lobel and Ryan Lufkin. Well, thank you for joining us for another episode of the EduCast 3000 podcast. My name is Ryan Lufkin, your co-host. And I'm Melissa Lobel, your other co-host. And today's episode is on navigating AI in education: ethical use, guidelines, and positive impact. So joining us today, we have Sidh Oberoi, who heads our partner program here at Instructure, as well as Claire Pike from Anglia Ruskin University.

 

Thank you for joining us, you guys. Great to be here. Wonderful to be here. We usually like to start with a little background from each of you. Claire, why don't you kick it off and tell us a little bit about yourself? Great. Thank you. My role is Pro Vice Chancellor for Education Enhancement at Anglia Ruskin University, and within that I have involvement in our overall education strategy, serving our students toward the best possible outcomes for their futures.

 

Claire, would you mind sharing just a little bit about how you ended up in that role and your journey into Anglia Ruskin? Yes, certainly. My disciplinary background as an academic is molecular biology. So I did an undergraduate degree, a PhD, and a postdoctoral research position in that particular form of science. And during the process of that journey, I had the opportunity to be involved in university-level teaching,

 

first in small group teaching, then lecturing, then designing courses and providing pastoral support to students. And I found that I was equally inspired by the research and education aspects of the academic world. I also found it increasingly interesting to consider how transferable different principles of course design and academic support are between different subjects.

 

InstructureCast (02:18.1)

So I led a course in biomedical science, and then I led a group of courses. I led education across the school, then the faculty. And now my role is university-wide, leading those strategic forms of education across all our different subject areas, which range from business and law to music, arts, drama, science and engineering, and a large provision in health and social care. Fantastic.

 

I love that. Sid, give us a little bit of yours. Of course. What a lovely background, Claire. That's super fascinating. I am also a science nerd at heart and by trade. I have kind of a fun little journey as well. I've been at Instructure for about seven and a half years. I live in London. I started out in product at Instructure for my first five years, then spent two years in strategy. And this year I've been focused on our partner program, where we have

 

over a thousand partners that work with Instructure and integrate into Canvas in a variety of different ways. However, immediately before Instructure, I had my own startup. It was an after-school learning center for kids in grades K through eight, teaching them STEM: science, technology, engineering, and math. We taught them everything from computer programming and 3D printing to mathematics to fashion design using technology. And we actually had 14 different learning centers in nine different US states.

 

And where that started off was that I was actually in the process of applying to medical school in the US. I was very young: 19 years old, and jumping into medical school at that age, as Ryan and Melissa probably know, is quite young in the US, where the average matriculation age is around 25, or at least it was 15 or so years ago. So I thought, what better sidetrack, or what better thing to look

 

good on a medical school resume, than having a startup endeavor focused on teaching kids the STEM disciplines in particular, recognizing a gap in the number of individuals pursuing those types of careers. So that was a really fun endeavor. It taught me a whole host of things about product development and business orientation,

 

InstructureCast (04:25.366)

as well as diving really deeply into the inner workings of education and technology, and how, when those two things collide, they can really deliver value to learners across the globe. I love that. And Ryan and I have worked with you for a long time, Sid, and I didn't know some of that backstory, so I love hearing that. And it leads to my next question, and I'll start with you, Sid. We always love on our podcast to ask our guests their favorite learning moment,

 

something they remember from their past, either as a learner or a student, or as a teacher, or watching some sort of learning moment happen. And given some of the background that you just shared, I'm sure you have at least one. Would you mind sharing that with us? Yeah, there are a plethora of moments, but one that immediately comes to mind is from my startup, where we did game-based learning.

 

So we taught a lot of people math, physics, and science through the video game Minecraft, which some of you may be familiar with: a block-based, create-your-own-world modality. And it was so fascinating to work with these second and third graders, teaching them basic principles of physics and kinematic equations inside the confines of Minecraft, and seeing these aha moments

 

with these seven- and eight-year-olds learning the laws of motion was just truly remarkable. And it really helps you take a step back and recognize the power that learning can bestow on humans when they're interested in the subject matter, right? Or when it is presented to them in an alternative format versus, you know, a traditional textbook, which all of us probably spent our academic journey learning through. Love that. Love that.

 

Those inspiring moments: I've had a few of those myself over my learning journey and teaching journey. Claire, how about you? Would you mind sharing a favourite learning moment, like I said, as a learner or as a teacher, or observing one? Absolutely. I think one of the absolute privileges of being a lecturer and a teacher is seeing the aha moment on students' faces when something they had perhaps conceptually struggled with previously clicks for them

 

InstructureCast (06:43.616)

in one way or another. And it has been my privilege to have seen that multiple times with many students throughout my career, in various different disciplinary subjects. Bringing together the combination of different aha moments, one looks at the distance travelled by students. And I say this particularly from the context of my institution, which is a proud widening participation university that has a student intake

 

comprising many people who would never have thought that university was for them. And yet they have found a path, found support, and found a future life empowered by education that is beyond where they initially thought their possibilities would lie. So I suppose I would distill those learning moments into something which is very topical this week, because we've been having our graduation ceremonies, and

 

seeing both individuals and the collective student body receiving their degrees, thinking about all of the learning moments that have contributed to that, and the huge collective achievement within that room, particularly of people who never thought they would get there, is just so joyful, so humbling, and wonderful to be part of. That's amazing.

 

That energy, I can feel it, right? I can imagine that. That's great. It's kind of one of those reasons we all come back to education, right? Why we stay in education is the impact we have on those lives, and that's kind of the culmination of those efforts. But diving into our topic now: we talk a lot about AI on this podcast, and in education right now you kind of can't avoid talking about AI. But we are really focused in on our topic today, which is the ethical use of AI in the classroom. I think, Claire, the reason we're

 

chatting is because I think your background is so unique, and Sid, you work with so many of our partners who are tackling this right now. But Claire, give us your thoughts on that ethical usage of AI in the classroom specifically. It's something that we as a university have rightly spent a lot of time thinking about recently. And indeed, this academic year we have formed an ethical use of AI policy, which has been approved by our senior committees, our university senate. So it's something which

 

InstructureCast (09:02.712)

has rightly received a lot of academic attention and is close to the top of people's minds at the moment. I think the crux for me is striking the balance between enabling students to be equipped for a world of work in which AI will be a necessary graduate skill. In my view, it would be wrong to simply ban it and pretend it's not there, because that would be disadvantaging our students when they enter the

 

workplace. However, we also need to make sure that when we are awarding academic credit and qualifications to students for skills and competencies, the certification is true: they genuinely do have those skills and competencies. And therefore, I think our educational policy work is trying intelligently

 

to delineate those things, and to tread the line between needing assessment tools and mechanisms to ensure that we are confident the student is capable of doing the things we're certifying them for, while at the same time ensuring they feel enabled and confident going into an AI-rich world, which is where we are, and will be increasingly so in the future. Yeah, I love that.

 

From a guideline standpoint, let's dig into that a little bit. What are some examples of acceptable use within the classroom, and then unacceptable? How do you define that, at least within the policy for your university? Absolutely. I think it's perhaps important to delineate between what students might do formatively in terms of

 

engaging in their learning as a learning journey, and then what is submitted as a piece of coursework or an assessment. Perhaps taking the latter category first, because it is the area that's attracted the most academic attention: we have the clear policy that if a student uses material that is generated by AI, whether that be text, music, images, or video, it must be properly referenced as such.

 

InstructureCast (11:20.544)

We would expect that transparency of quotation, showing how the AI engine created a particular output with the student's contribution as the prompt author, to be articulated clearly within their submission. And then, of course, the flip side to that for a university is how to handle cases where it is not clearly articulated. And we are not reinventing the wheel from scratch there.

 

We have existing policy, for example, for people quoting from a paper or a textbook: we would expect that to be transparently put in quotation marks and the source acknowledged. So we are using a similar approach: if something is AI-generated and the student is using it, then they should acknowledge it as such. Now, that, of course, will be used differently by different students depending on their strength and the level of their critical analysis,

 

whereby a weak student, an ethical but weak student, might simply insert the AI-generated output, reference it, and leave it there. At that point they haven't done anything wrong, they haven't done anything unethical, but they are unlikely to attract very much credit for what they've done, because there's not very much evidence of their own work having been brought to the party. However,

 

a very good student might use an AI-attributed starting point and then go on and build on that, critique it, and demonstrate all of the ways in which they are adding personal further value from that point onwards. And indeed, some of our innovative staff have begun setting assessments that explicitly ask the students to do that, whereby the assessment brief

 

is: get an AI engine to generate a first draft of the following question. That part isn't particularly marked; the part the student is marked on is their own critical analysis, markup, and improvement of the AI output. Perhaps drawing on the oft-used adage that an AI engine is like having a very enthusiastic intern

 

InstructureCast (13:40.28)

who works very hard but sometimes gets it wrong. I think that's actually a great analogy. I love that, I love that. And by showing those higher-level skills, you're taking the intern's work and you're saying, well, you've done a great job on this bit, but let me show you how actually this particular part of the argument wasn't quite right and we need to improve here, which is exactly that higher level of graduate skill that we want our students to develop and be able to demonstrate. So I think there are very good, creative ways

 

that we can design assessments to make it clear, from a regulatory perspective, that AI must be referenced and attributed where it's used, but also that we can engage students in that higher-level skill development, using the AI output as grist to the mill that they then improve upon as part of their own demonstration of practice. So that is the assessed-student part in a nutshell. You also asked me about the classroom. I think many of the same

 

principles and skills apply, but obviously in an active learning environment it can be a more fluid, iterative process, whereby students all together, say if they're working in art, might improve upon different AI outputs and then discuss among themselves perhaps the limits of the technology there, where those limits are marching all the time. I'm sure that

 

image generation is better now than it was even three months ago, and the rate of technology development is incredible. The way you've thought about that is so parallel, just as you described, to how we've thought about having students use references or materials throughout their learning experience. And I'm already thinking, not even just about how students are demonstrating the work they're doing and then reflecting on and enhancing that,

 

but about the rubrics we can use for marking those students, and how we very clearly incorporate those elements of how they will be assessed into how we are very evidentially marking students. That's an interesting thing. I talk a lot on this podcast about how I teach, and I'm already thinking about my rubric, because I want students explicitly to experiment with AI tools, and about how I will build that in so it's very evident that I want to see them

 

InstructureCast (15:59.83)

reflect on, use, enhance, and incorporate, specifically, the work coming out of the AI tools. I just love that. And Mark Brumwell, who was on our broadcast last time, from Oxford, his line was: using generative AI isn't the issue; misrepresenting your work is the issue. So really finding ways to document that, and supporting students in documenting that, is incredibly important. I would love to take that point in particular, Ryan. Sid, you work with

 

hundreds of vendors in your current role. You've seen it all, and I suspect hundreds and hundreds more are calling you saying, hey, I have this great AI tool. How are you seeing technology out there balancing this same ethical versus unethical use, or helping education organizations in particular feel comfortable that those tools leveraging AI can be used in ethical ways? Yeah, I think that's a great question. But firstly, I do want to applaud

 

Claire and the team at ARU for their methodical approach to understanding the utilization across their institution. I think that's phenomenal. I think Ryan, you, Melissa, and I spent a lot of time over the last two years speaking to institutions across the globe, trying to understand the dynamics of how their institutions were using AI, both from the faculty perspective as well as from the student perspective, right? And that kind of

 

dictates the capability to provide guidance. And it sounds like ARU has really taken the lead in driving a path forward that could be well represented for their students. And that's what we hear a lot on the partner and vendor side, right? They want to help institutions understand the dimensional analysis of how AI is being deployed inside their institution. We even have some partners that are helping institutions

 

craft their AI usage policies through the utilization of data, right? Trying to identify that, on a department-by-department basis, there will be different standards associated with what is and is not acceptable. The College of Law may have a very different threshold than the College of Medicine, which may have a very different threshold than, you know, your sociology department. So having the capability to understand where students, where faculty, where

 

InstructureCast (18:11.936)

administrative staff are utilizing it helps dictate the academic misconduct policies that ultimately ensue. Separately, the biggest question that I get when I speak to both institutions and partners is the privacy component associated with AI in general (sorry if this is a later question), right? Trying to ensure that you have that walled garden, because when, you know,

 

ChatGPT landed on everyone's desk a few years ago, through the lens of OpenAI, you had a plethora of individuals uploading institutional IP, both from a company perspective and from a university perspective, as well as students uploading, you know, PII, which is all problematic at the end of the day. So

 

we have a lot of partners that are helping educate institutions in terms of best practices. But as they deploy their own AI tools inside VLEs and learning management systems, it's critical and paramount that they think about the ethical implications as they pertain to IP in particular, right? How are you giving away trade secrets that may differentiate you from other companies, from other institutions, from other higher education

 

colleagues? That is ultimately problematic, right? So what we're continuously stressing is helping put up that walled-garden approach, to ensure that you have a closed ecosystem that is a safe space for students to feel like they can utilize the systems, and for faculty to feel like they can utilize the systems, which then leads into, you know, a different ethical analysis of what academic misconduct can and/or should be at the end of the day.

 

One of the things that I think is so challenging is that this is kind of an onion: as we peel back these layers, we see even more layers underneath, right? And, Claire, I think you brought up the point that even the different generative AI output tools, whether it's text, whether it's images, whether it's video, whether it's coding, bring different challenges. So as you start looking at establishing guidelines, how did you decide:

 

InstructureCast (20:28.802)

who owns this? Who's driving this? Who do we bring to the table? In terms of our policy, the work is being led by our central specialist unit called Anglia Learning and Teaching, who are educational developers working to support education policy and practice across the whole institution. Having said that, though, the working group that they've led has comprised colleagues from all different subject areas.

 

So we have central leadership with consultation and input widely across the university, because indeed it takes many different perspectives to work on this. I have an additional question. You mentioned earlier that this went to Senate, your faculty Senate, and that this was endorsed throughout the university in a lot of ways. I'm sure there were people who were concerned, and healthy debates around this. I think there are institutions putting together guidelines around the world

 

that are concerned about faculty reaction to those guidelines, or are concerned that faculty don't have the understanding or knowledge needed to really make the right choices or to understand the debate. How did you manage ensuring that across your faculty they understand AI enough, or they've explored AI, or they can...

 

We have the innovators and the laggards all together, I guess. Exactly. So how did you manage that? Because I know there are going to be listeners thinking: we've got this great policy, we put this together, we're so excited about this, but this has got to go to Faculty Senate. How do we work through that? It's a good question. And I think you're right that there is, of course, a bell curve of people's level of interest, understanding, and engagement with new technologies, and indeed new policy.

 

I suppose in terms of managing that on a practical level, easy to say but nevertheless super important, communication is key. And although of course AI is a fast-moving area, it is not out of kilter with the normal business of universities to need to update policy, practice, and our ways of conducting education. So therefore we do have dissemination and communication mechanisms that are well underway

 

InstructureCast (22:44.81)

and will respond, for example, to changes in government policy about all aspects of education. Visa regulation, for example, is something quite unrelated, but it nevertheless changes and we need to adapt to it. These mechanisms of dissemination were perhaps used most intensively during the COVID-19 pandemic, when the extent to which we as a university were permitted to have

 

teaching on campus or online was changing almost weekly. So I suppose in some ways it's a case of working out which tried and tested mechanisms for communicating within and among a large organization are relevant. And we're not reinventing those from scratch. The subject matter of AI is new, but I don't think we should bamboozle ourselves into thinking that therefore absolutely everything about the way we handle the situation has to be invented from scratch.

 

We do have ways of communicating with our staff and, over time, upskilling people about things. That is an aha moment, right, Ryan? I mean, that's an aha moment, I think, for a lot of our listeners. Yeah, spot on. I just think the pace of change has turned up, but we do have those mechanisms. I think the takeaway really is: build on the existing procedures and policies, and we just have to do it faster.

 

Yeah, maybe COVID was a good dry run for what we're experiencing now. Yeah, and we do need to do it quickly, but I would contend that the pace of change for an HEI during COVID was faster even than the advent of AI. I'm not saying it's easier, but we are resilient organizations who have coped with change before. Yeah, I love that. That's so right. You know, I'm thinking too, there's a theme so far to our conversation: while

 

there's a lot of really great opportunity through AI, AI isn't a destination in and of itself. And I think about this from a product perspective. So Sid, I'm going to throw this your direction. There are camps, and I still hear this from institutions, asking: what AI tools do you have? What are you doing? What's your AI product? And then there are organizations, and products in particular, that understand it's really not about that product destination. It's about how you use AI most effectively.

 

InstructureCast (25:04.684)

How have you seen that with all these vendors you're talking to? Do you have any good examples of how to rationalize doing AI for a meaningful outcome rather than doing AI for the sake of doing AI? Yeah, it feels very similar to the conversation we used to have around data and analytics a few years ago, right? Everyone was like, I want data and analytics, I want data and analytics. But there was no intention of what you were going to do with it, right? So we wanted to think about what is actionable. Similarly, when we think about the deployment of

 

AI, it's: what use case are you trying to solve? And I'm very excited about the approach that we've taken at Instructure, and what I've seen a lot of other organizations do, which is: how can it help improve teacher efficiency, teacher efficacy, and personalized student learning? But the other dimension is, you know, thinking through the lens of equitability, right? Trying to meet learners where they're at, which I think is a phenomenal thing and something that's near and dear to my heart: trying to create tooling

 

that is going to bridge the gap in learning attainment that exists so prominently throughout the globe, right? And I think, you know, Claire, you touched upon this a little bit earlier, thinking about helping students that have never imagined themselves in university put themselves into that place at ARU. And I think there is the ability for more of that to occur via the proxy of AI, right? Where learning tools can become more accessible.

 

And, you know, accessibility is such a broad term. You can think about it through the lens of meeting learners where they're at to bridge that learning attainment gap, but also through the lens of those with different learning abilities and learning challenges, who have to navigate hearing impairment or visual impairment: making sure that there is alt text on every image, or making sure that there is voiceover functionality on a page, and that whatever their professor is uploading has all of those situations taken care of.

 

AI can automate those things, right? It can tag, identify, and insert those things on behalf of instructional designers, on behalf of faculty, which streamlines the process, right? And it goes back to that notion of how we solve for things like instructor efficiency, because as resilient as

 

InstructureCast (27:15.838)

HEIs, higher education institutions, are, faculty have a lot on their plate at the end of the day that they need to solve for. So as much as we can aid and enable them through the utilization of technology to, you know, solve for some of those gaps, as a precursor or a pre-check into a lot of the, I don't want to call it monotonous work, because it is really powerful and meaningful, but making sure that those checkboxes are ticked

 

to ensure all of their learners have the capability to learn, and it's helping meet them where they're at. I think that is paramount to any conversation that we can and should have about AI. So that's where I love to see the use cases for AI: solving real challenges that are so critical to teaching and learning across the globe. And we see a lot of our partners doing that, which is phenomenal. And we've talked a little bit about the fact that

 

once you move beyond that accessibility, it really is the personalized learning. I think we're just scratching the surface of how AI can really personalize the learning journeys for students. So Claire, how do you balance this idea that at some point in the future we'll be able to open up these really personalized pathways, versus, you know, needing really good guidelines in place to make sure we're not overstepping with AI? How do we balance that kind of innovation for the future with regulation for today? I think, again, I would draw a distinction between AI as a way that students learn, as a

 

tool and a way of improving their learning, and the issue of authenticity of assessment, because they really are quite different things. We know there exist, and are now very widely used, tools to summarise text, for example, whereby in order to digest a very long and complex chapter, students are quite routinely using an AI summarise function in order to draw the key points out of that paper or chapter.

 

And when used in an ethical way, I think that's a brilliant starting point, in that the student will then be able to read the full text with probably a greater understanding of the author's intentions, and they will get more from that read-through having read the AI-generated summary first. So I think there are lots of ways in which students can take control of their own learning using AI as a helper, directing the AI to

 

InstructureCast (29:32.95)

summarise, condense, search, and present back to them synthesized results which will enable their own learning. And all of that is part of a personalized, individualized learning experience that the students themselves are in charge of. And I think that that's very important and powerful. And as you say, Sid, that can also be part and parcel of the widening participation mission, I think, in terms of people having tools at their disposal that help them to learn in a way that is useful to themselves.

 

That, I think, as I say, is an entirely different category to the idea of authenticity of submitted work. And quite a lot of the narrative around the use of AI in education has, for really good reasons, focused on the latter category: how do we know that what the students have submitted was done by them? And I'm not belittling that; we absolutely need to take it seriously. But at the same time, when we step away from the graded workspace,

 

there is so much possibility for the student to use AI productively in order to personalize their learning journey and their ways of studying, in ways that are productive for that individual student. I love that. I think we're all hankering for that moment, Claire. Hopefully we can get there in the near term. Well, I think that begs one of our last questions, around the future.

 

What does this look like in three to five years, or what do we hope this looks like in three to five years? I don't know. Sid, do you want to maybe start with your thoughts on what you would like to see AI enabling in learning in the next three to five years? I think a lot of it builds off of Claire's last statement: moving past this notion of drawing down the lines of academic misconduct, and elevating our conversation around AI toward

 

more of these cases and problems that it can solve, right? I would love to see it deployed more. You know, I'll nerd out a bit again, coming from science and being an avid climate advocate: thinking about how we can utilize AI in the sense of solving real-world challenges, right? Not just educational challenges, but as we see

 

InstructureCast (31:41.578)

it introduced to digest large data models and simulate different scenarios associated with some elements of green technology. I think that becomes incredibly exciting for ways that we can approach solving the different global problems that we as citizens of the world are ultimately confronted with, right? And I think that starts with deploying it inside, well, I hope it starts with deploying it inside of education, so that it inspires the next generation of the workforce to put

 

that learning into good use and to identify how they can take it into the workforce and really deploy it to figure out where there can be true innovation, right? Because that's the potential that we need to tap into: the innovative nature of what this technology can and should provide

 

to meet society where it needs it, rather than the doom-and-gloom conversations that we often get drawn into. It's understanding: well, what can carbon sequestration look like in the future? How can we empower that? What are more efficient ways to produce the different technology that is necessary in order for us to get there? And I think that's where, in postgraduate study and industry, et cetera, things get really exciting,

 

and where I would love to see society go in the next three to five years. Very inspiring. Claire, your thoughts about the future? Thank you. I think there are already fields where AI is improving the efficiency, and sometimes the quality, of our operations. I mentioned before that my own background is in molecular biology and biomedical science. One important aspect is laboratory diagnosis, where, for example, biopsies or

 

cell samples are taken from patients, and traditionally highly skilled, trained pathologists will examine those under the microscope and make a diagnosis based upon their findings. This is essentially a highly skilled pattern-recognition task, for which one can train AI to a high level. It will always need human expert skill and verification, but I think that we could, for example, improve

 

InstructureCast (33:56.142)

times to diagnosis within the NHS and other health services by employing AI at the right point to do a first pass, checking all of these samples and making recommendations for diagnosis. That is just one example of work that is already, I would say, on the cusp of being implementable. And I'm pretty sure that every sector will have its own examples and stories. So therefore, as educators,

 

I think it's important for us to be thinking about teaching students how to use AI in their future careers. 100%. I love this. Yeah. So for trainee pathologists right now, yes, they do need to understand how to do this manually, as people have done for a long time with their high degree of training. But I think we ought to be teaching them now how AI will be their helper in this, and how an AI-trained model, if you like, can give them

 

a more efficient workflow, such that they then apply their skills at the end, with the AI as an intern having produced a first draft of the results. And we need, in a university space, to be educating our students on that collaboration with AI that they can then take into the workplace. I love that. We talk a lot about AI literacy and ethical usage guidelines, all of these. We're training the future right now, right? These students need this more than ever.

 

And I've already talked about, you know, my son who's 13 years old; they're not receiving this kind of ethical training around AI. There's not a lot of focus on AI literacy at the primary school level. So it really is incumbent upon universities to take the bull by the horns there, I think. Yeah. And I think it's cross-discipline too, right? It's in every discipline. There should be a "how do you apply AI to your discipline?" part of the curriculum. That's what I'm pulling from this, and I can think of so many disciplines where that doesn't exist.

 

I'm from social studies by trade: political science and history. And there are so many good places to apply it, but I'm pretty sure those disciplines haven't yet thought meaningfully, or if they have, it's in pockets, which is amazing, about how we do this, and how we help students understand how they're going to use AI to better their field, to progress their field, to innovate in their fields. Just so, so awesome.

 

InstructureCast (36:18.102)

I think this leads us to our last question, which is: what didn't we talk about on this podcast that we should be talking about as we're thinking about AI? I have a suggestion. And this is blue skies; it's not a current practical issue. It's about the further future, and the extent to which AI is ready to take on human activity, shall we say. And I think the area where I feel

 

the technology has not yet developed anywhere near where it needs to is the robotics linked to AI. So for example, AI could predict how we should grow crops in real, large outdoor fields, but we do not yet have the robotic sophistication to act upon that AI model. That is still very manual. And one could apply that in a more domestic environment to, say, cleaning and housekeeping:

 

sure, an AI model will tell you how to keep your house beautiful, but we don't really have sophisticated robots who will execute it for you. And I think we're at an interesting point here where we are possibly automating white-collar jobs, but not so many blue-collar jobs. Yeah. And I feel like this fear that's been cultivated around AI, in a lot of cases, has to do with those, you know, robots turning on us in our homes and things like that. That

 

fear is part of what's impacting that. Well, I think many people would be really pleased with a future whereby their housework was done by AI and they personally got more time to spend creating art and music, reading interesting literature, and the like. And, slightly oddly, AI is taking us in the opposite direction, whereby it is beginning to create art and music and literature. But wouldn't we rather it did the housework?

 

That's just kind of a philosophical point to end on, in terms of our direction of travel. I love that. I love that as well. I think that's a great point, and it kind of harkens back to some of our earlier conversation, right? What are the use cases you're thinking through and trying to solve, really trying to identify what is the power that can be manifested in the realms of the technology. And

 

InstructureCast (38:33.006)

it's evolved so quickly in the last two years, from, you know, GPT-3 to 4 to 4o and all the iterations in between. And I think it is really trying to identify: what is the point? What is the rationalization for what we want it to do? It's more of a meta question of

 

how do we envision ourselves interacting and engaging with it in the future, versus, you know, what it does for us in the immediate term. And I think that would unveil a lot of net-new possibilities and drive us down a fun path, if you will. A more fun and productive pathway. Yeah. Well, thank you both so much for your time. Our time has flown by on this podcast. It's always an amazing conversation. Claire, thank you so much for your input on this. I think your background is fascinating

 

and brings a really interesting perspective to this conversation. Sid, yours as well. So yeah, hopefully we'll revisit this again in the future and see if any of our conversation has come to pass.

 

Thanks for listening to this episode of EduCast 3000. Don't forget to like, subscribe, and drop us a review on your favorite podcast player so you don't miss an episode. If you have a topic you'd like us to explore more, please email us at InstructureCast@Instructure.com, or you can drop us a line on any of the socials. You can find more contact info in the show notes. Thanks for listening, and we'll catch you on the next episode of EduCast 3000.


