Podcast
    Threat Intelligence

    Get Cyber Resilient Ep 128 | AI and Cyber 101 with David Higgins, former CISO for Kiwibank

    On this week's episode Gar talks with David Higgins, former CISO for Kiwibank.

    In this conversation, David takes us through what AI and ChatGPT mean for cyber, providing a clear understanding of what it is and what it isn't. He also provides insights into what it means for both the attackers and the protectors, as well as what is hype, what is real and where it leads us. To wrap the episode, we cover a topic that is very important to David: people.

    The Get Cyber Resilient Show Episode #128 Transcript

    Gar O'Hara: Welcome to the Get Cyber Resilient podcast, I'm Gar O'Hara. Today we are joined by David Higgins, who's held many leadership roles in cybersecurity in banking and finance, and who many in New Zealand will know as the recent CISO for Kiwibank. David is leading by example and taking a break from cyber right now, but kindly joined for a conversation on what AI and ChatGPT mean for cyber: what it is, what it isn't, what it means for the attackers and what it means for the protectors, what is real and where it leads us. And we talk about a topic that's close to David's heart and something he covered so well in his talk at CyberSecOn last year in New Zealand: people. Over to the conversation. Welcome to the Get Cyber Resilient podcast, I'm Gar O'Hara. Today we are joined by David Higgins, cybersecurity leader and very much lauded conference speaker. Saw you over at CyberSecOn and we may get to the- the kilts, maybe we won't, who knows? But really great to have you here today David.

    David Higgins: Thank you Gar, it's really good to be here. A that-... no promises on the kilt though. I'll say that to start. [laughs]

    Gar O'Hara: [inaudible 00:01:17] audio only, so maybe that saved us.

    David Higgins: Well, I- I don't know. I mean, if people are using their imaginations I'm not sure whether I should be flattered or scared.

    Gar O'Hara: [laughs]

    David Higgins: But yeah, no, thank you for your warm introduction. I guess, you know, I'm just gonna jump in and introduce myself to- to the audience here. You know, yeah, right now I'm on a break. And I think that's- that's an important thing to call out. You know, I'm a real big believer in personal wellbeing and mental health, particularly in the information and cybersecurity industry. It's something that we probably don't focus on enough. And you know, happy to be able to have a bit of a conversation with you about that today.

    If I think about, you know, my career I'd probably go back to the late '90s. So I started my working life in the United Kingdom, in England, working for financial services organizations. My first job was putting desktops on people's desks. And it was sort of going from a paper-based workforce to a digitally enabled workforce. So... you know, that was really sort of that crossover between, you know, the analog and the digital worlds. And obviously having embraced that, seeing the digital world just become the all-encompassing behemoth that it is today, and a strategic enabler of so much value and everything that we expect in our lives.

    Probably got into cybersecurity about 2007 specifically, and that was working for a pharmaceutical company in the UK. And, they didn't just need systems admin but they [inaudible 00:02:50] UK and Ireland IT and information security representative, and I- I sort of, I wouldn't say I stumbled into it, but I put my hand up and they were like "Sure, on you go." And then I came to New Zealand in 2009, originally for a one year holiday, but really I've been here, you know, pretty much ever since. And as the information security industry started to develop and take shape in this part of the world, you know, I really sort of embraced that side of my experience and tried to build upon that.

    Starting as a sort of a security operations and technical specialist and then moving into your sort of consulting and advisory roles and then into information security leadership, which ultimately led me to sort of the vice president role that I held at Jarden as their Head of Security and Digital Risk, and then ultimately as the Chief Information Security Officer and Head of Digital and Tech Risk at Kiwibank. So, you know, probably been in the game for a wee while but taking a strategic pause.

    Gar O'Hara: And this sounds like a much-needed strategic pause. Like, you-... one of- one of the things that I think is a hot topic at the moment and- and an interesting time maybe to take a break from cyber just given what's going on. We're actually gonna end up talking about both ends of the spectrum, I will definitely get to the human side of things, that's clearly a passion topic for you. But right at the other end of the spectrum is kinda tech and, you know, the machines, automation, AI. So maybe it's a great place to start and kinda get your take on the- the big news story for probably the last two or three months, which is ChatGPT. And awesome to just start with your take on- on what it is, 'cause I think there's a lot of hype around there and a lot of stuff being said. So what it is and then what it isn't, maybe through the lens of cyber.

    David Higgins: I think so, for me, and that's a fantastic question you know, like... for me, you know, you're absolutely right, the value of automation and the value of tech is- is not to be underestimated. ChatGPT, I think... what it is not, is- is the AI apocalypse coming to get us.

    Gar O'Hara: What?

    David Higgins: To- to-... yeah, I know, right?

    Gar O'Hara: [laughs]

    David Higgins: And it's- it's-... I feel like everyone seems to think it is. But ChatGPT is, for me, you know, it's- it's a way of enabling and allowing people to have more fluid conversational interactions without necessarily having a grasp of a language itself. It's a large language model. You know? And- and so, you can fire questions at it and it will look at a set of data and it will try and answer you in a conversational way, but it can also fit into other sort of, you know, AI and sort of slightly more general purpose-y type things to provide you with better information sets around, like, you know, potential [inaudible 00:05:48] queries and things like that.

    And I think it was Check Point Research that actually published something start of this year, late last year, indicating that people were able to use ChatGPT as a way of creating, you know, sort of phishing email campaigns. And- and allowing people to gain access to, I'm gonna say, language that might be outside of their normal capabilities, you know? 'Cause obviously there's a lot of people that are writing phishing campaigns and one of the major things that we look for when we are on the defending side is, you know, grammar, syntax, obvious spelling mistakes and things like that. You're really, in my view, you're giving them the capability to omit those kind of-... those default gotchas from future campaigns. And I think that's going to make it a really interesting environment to protect against once you sort of-... once you remove those common denominators from the- from the threat pool.
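
    To make that point concrete, here is a minimal, hypothetical sketch (in Python) of the kind of surface-level check David describes. The word lists, weights and scoring are invented for illustration; the point is that text polished by a large language model would sail straight past exactly these checks.

        # Minimal, hypothetical sketch of a rule-based check for the "default gotchas"
        # (misspellings, urgency phrasing). Word lists and weights are invented.
        import re

        COMMON_MISSPELLINGS = {"recieve", "acount", "verfy", "immediatly", "pasword"}
        URGENCY_PHRASES = {"act now", "urgent", "verify your account", "suspended"}

        def naive_phishing_score(body: str) -> float:
            words = re.findall(r"[a-z']+", body.lower())
            misspelled = sum(w in COMMON_MISSPELLINGS for w in words)
            urgent = sum(p in body.lower() for p in URGENCY_PHRASES)
            # Each crude signal nudges the score toward 1.0.
            return min(1.0, 0.2 * misspelled + 0.15 * urgent)

        print(naive_phishing_score("Urgent: verify your acount or it will be suspended."))

    An LLM-written lure with none of these tells would score 0.0 here, which is exactly the gap being described.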

    Gar O'Hara: On that, do you foresee a future, so, like all of those- those kind of attributes that you look out for on the defense side, one of the things is also style. And we talked about this probably in BEC specifically, where one of the mistakes that gets made is the person kind of pretending to be David Higgins uses maybe a phrase that they don't normally use or signs off in a way that they don't normally use. But with ChatGPT and AI, right, you can ask it to write in the style of X, Y, Z. So take it to the point where I could say, I wanna write a phishing email, a- a very targeted spear phishing email, in the style of David Higgins asking for an account change for a, you know, a particular payment. Do we get to that point, do you think?

    David Higgins: I suppose it depends on how much of yourself you're putting on the internet [inaudible 00:07:49] 'cause ChatGPT's only going to be able to-... or any sort of language model is only going to be able to reference as much information about you as- as is available. Do you know, of course a lot of it's gonna depend on the type of data that gets stolen as well, you know? You may find that actually things that previously were not massively high value may become more valuable to attackers. You know, rather than looking just for things like, you know, card data, user information, we may find people actually pulling out troves of, you know, email communications as a specific target, because that will inform them on how to create something in the style of a given person. Because, you know, we all communicate digitally in a similar style to how we would verbally.

    Now, you know, and I think that, you know, it's really interesting actually 'cause I think that ChatGPT is getting a lot of focus right now. But one of the other things that I thought was really, really interesting was in some of Microsoft's work that they've been doing recently. They were able to recently say that they can effectively recreate the voice of anyone with about three to four seconds' worth of audio capture. You know? And, you know, and- and unfortunately I don't have a source to hand, but definitely they've been doing a lot of work in that space. And so, if you think about it in the financial services industry, one of the things that is relied upon is biometric voice printing for identity authentication. And, you know, as part of the verification of who you are as an individual. If you can convincingly grab that for someone with three or four seconds' worth of audio recording then, you know, there's a lot of potential for that to be abused as well.

    So it's, I think, what we're sort of getting towards is the, the true ability to verify identity via digital means is going to be further eroded. And of course, we've got this bit where as a culture, as- as humans, we are-... we've trained ourselves over the last 20 to 25 years of digital uplift to actually be quite trusting of these platforms, you know? If it sounds like- if it sounds like a duck, walks like a duck and talks like a duck, it's probably a duck. Unless it's an AI. [laughs]

    Gar O'Hara: An a-... AI ducks, it's a dystopian future [inaudible 00:10:14].

    David Higgins: Well, it depends if you like ducks.

    Gar O'Hara: Yeah. Is that one 100-foot duck or 100 one-foot ducks? Which-

    David Higgins: Hmm.

    Gar O'Hara: ... yeah, which AI duck do you-

    David Higgins: I don't know. I don't think I'd wanna take on either of those.

    Gar O'Hara: Yeah, definitely not. Yeah, look, in reality and maybe I missed this stuff along the way, but I don't remember anything, you know, sorta landing with the thud that ChatGPT has in terms of it's... you know, like we kinda said it at the start, right? It's, you know, a bit of a-... it's not the apocalypse potentially. But wh- what do you think the very kinda real and present dangers are for security teams? And you kind of alluded to obviously phishing as one of them. But like... yeah, what else is going on there?

    David Higgins: Well, I- I think, you know, there's- there's a lot of potential for, you know, for this capability to be abused for, you know, writing new malware code. And, you know, if we look at some of the traditional, you know, protective layers that organizations would have in place, you know, if- if the opportunity is there for somebody to, with minimal additional effort, make a number of queries and iterate- iterate their code to the point where it hits a different, you know, a- a different sort of signature and it's tagged as new, and it can be, you know, implemented into other ways of moving through perimeters and edge layers and things like that, then I think that, you know, that's definitely one of the things that I would expect to- to see evolve from the attacker's side.

    You know, for me it will be- it will be the idea that- that people who have historically had to consume sort of malware as a service and don't necessarily have the best grasp of delivering a phishing campaign, you know, I kind of describe it as the background noise of the internet. These kind of-... these low effort, high volume type pieces. I- I think that, you know, we could see a scenario where actually that- that bar for quality that we kinda try and move to before we kind of have human intervention, I think that's gotta come up. And that means that we're going to have to actually spend, you know, more time refining and iterating our defensive layers from the- the defender's perspective.

    But also, I- I think, you know, we- we might also increase, you know, the analyst overhead as well, which is something you've always got to be very cognizant of when you are putting together a security team. You know, when you're running a defensive capability, you know, your analyst hours are some of the most valuable hours you've got when it comes to your cyber defense sort of capabilities over-

    Gar O'Hara: Yeah, understood. One of the things the providers of technologies like this talk about, I suppose, is the safeguards they put in place. And, you know, if you ask, hard to re-... I'm- I'm stealing this, I- I cannot remember where I heard it on a podcast recently, but the analogy was to 3D printers, where if you ask or try and, you know, print a gun on a 3D printer, the safeguards stop you from, you know, doing that. But if you ask it to print the barrel, print the trigger, print the magazine, like you can print the component parts and then just assemble it together. What are your thoughts around, like, the future safeguards and being realistic, will they be use-... like will there be efficacy there with the safeguards or are we just-... are we being told things to make us feel better about it?

    David Higgins: [laughs]. I think- I think we've- we've already seen a couple of examples of sort of responses to this sort of thing. If I recall correctly, there was an immediate concern that ChatGPT was gonna be used to cheat on academic exams and things like that.

    Gar O'Hara: Yeah.

    David Higgins: So somebody wrote a module that effectively allowed documents and information that were submitted to these things to be scanned to see how much of it came through ChatGPT. That landed and almost immediately there was another thing that was developed to effectively work past that. So it's kinda like, it's this almost immediate escalation. And something else I also saw was that there had been something written that effectively- that effectively, I believe its intent was to make ChatGPT ignore its, you know, its- its morality bits, you know? The things that allowed it to go "No, I probably shouldn't be doing that."

    But I think, you know, you make an excellent point. If you asked the right question with the right context and the right scope, that individually does not constitute a major problem or a breach from the perspective of a thing that cannot discern context. You know? It's- it's still just questions and answers. So, you know, you- you say "What does this bit do?", it'll say "It does this", you go "Great, cool." And, like you say, you just-... you- you build the barrel, you build the handle, you build the trigger, you build the bits and then you say "If I wanted to join these things together in a way that [inaudible 00:15:15]."

    Gar O'Hara: [laughs]

    David Higgins: And that question is also not against its, you know, against its- its pieces there. So, I- I think that regardless of how well people implement safeguards to prevent abuse of a system, you know, a language model like this, I- I think that you're always gonna be running the scenario where something asked in the right way of a system that doesn't have an ability to discern context will deliver an answer to the best of its automated abilities.

    Gar O'Hara: Yeah. I wonder whether we're gonna see a sort of sci-fi author of today's world write the equivalent of, you know, I think it was Isaac Asimov, wasn't it, that had the laws of robotics or something like that. You know, I won't kill, yeah, yeah. Maybe we'll get some- some version of that.

    David Higgins: I mean, you know, like... I- I believe that, you know, that's probably a matter of time. But I- I think rather than the laws of robotics, I think, you know, we may end up in a scenario where we are probably just having to try and sort of understand how rapidly it's evolving and working within- working within the broader context of that. You know, it's-... I think it's- it's one of those things where I think, you know, the cat is out of the bag now. And it's the first time that we're now starting to see a scenario where, you know, where human deception might actually become a useful mechanism for gaining additional information from automated systems. Which is an interesting concept and one that, you know, I sort of-... I mentioned it in an article that I wrote.

    But at the same time, I think it is one of those things where up till now, deception was something that was practiced by humans on other humans. And now we're getting into the-... sort of this world where we are able to ask the wrong question in the right way of an automated system and it's likely going to give us an answer for something that maybe it shouldn't.

    Gar O'Hara: It's- it's almost like a reverse Turing test or something like that, isn't it? Well, not- not really but the spirit of it is kind of the same. How do you see, you know, 'ca-... obviously we- we and I think everybody else at the moment is talking about the potential, you know, impacts for cybersecurity from the attacker's side. But AI and things like ChatGPT, there's a play here presumably on the defensive side and the kind of blue teams. Like, yeah, wh- what do you see that as?

    David Higgins: Well, do you know, it's funny actually. One of the analysts that was working for me previously... I was having a quick chat with her not too long ago. And it turned out she'd been using ChatGPT to help refine specific sort of search queries that she was using across large data sets on the defensive side of things. You know? So... you know, using your common, some- some of the more [inaudible 00:18:05] common security [inaudible 00:18:06] available there. You know, she was asking questions of ChatGPT, saying "I'm using this, I want to be able to do this, how can I do that?" And that was delivering, you know, a refined experience and-... elegant queries.
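
    Here is a rough sketch of that kind of query-refinement workflow, assuming an OpenAI-style chat API (the openai Python package, v1 or later, with an OPENAI_API_KEY in the environment). The model name and the Splunk-style target syntax are illustrative assumptions rather than details from the conversation, and anything the model returns still needs the analyst's review.

        # Rough sketch: turn a plain-English hunt question into a candidate search
        # query via an OpenAI-style chat API. Model name and the SPL target syntax
        # are assumptions; the output is a suggestion for the analyst to review.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        question = (
            "I'm searching Windows authentication logs in Splunk. I want failed logons "
            "for a single account from more than five distinct source IPs within 10 "
            "minutes. Write the SPL query."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "You help a SOC analyst refine log search queries."},
                {"role": "user", "content": question},
            ],
        )

        print(response.choices[0].message.content)  # candidate query, not gospel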

    So I think that definitely is a force multiplier for your analyst hours and actually reducing the number of hours that are required to investigate potential incidents. If [inaudible 00:18:35] used in the right way, you know, I think that, you know, ChatGPT and other AI and automation tools could be a significant force multiplier there. In fact, one of the products that I have used in a couple of the deployments and the implementations that I've done throughout my career, you know, it had quite a strong focus on AI and machine learning as a defensive piece. And one of the things that it kept referring back to was the number of analyst hours that it had saved by running all of these automation passes across there.

    And if I think, you know, over a monthly basis, now this thing was consuming massive data sets, you know, in the order of magnitude of terabytes of raw network data per day flowing across the sensors. And it was sort of able to say, "Well, look, you know, of that- that volume of information, there are maybe 12 to 15 things that I think you should look at." Everything else goes into a large bucket on the side here but it doesn't really hit the threshold. And therefore it's saved you, you know, we've actually saved you like 35 hours of analysis per [inaudible 00:19:40]. And it's like, you know, those-... if you're able to start talking in those pieces, you know, it- it sort of comes back to this idea of ultimately you are going to need to have somebody look at it and effectively quality check the work. But, you know, you can- you can also massively scale up the amount of information that you're able to ingest, refine and get that-... sort of that first cut of by embedding good machine learning and AI capabilities into the organization.
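
    That triage pattern, churning through a huge volume and handing an analyst only the dozen-or-so items worth a look, can be sketched in a few lines. The example below is a hypothetical illustration using an isolation forest over synthetic flow records; the feature names, the model choice and the cut-off are assumptions made for illustration, not the product David refers to.

        # Hypothetical triage sketch: score a large pile of flow records, surface only
        # the handful that look most anomalous for human review. Features, model choice
        # and the cut-off are illustrative assumptions.
        import numpy as np
        from sklearn.ensemble import IsolationForest

        rng = np.random.default_rng(0)
        # Stand-in for a day's flow records: [bytes_out, duration_s, distinct_dest_ports]
        flows = rng.normal(loc=[5_000, 30, 3], scale=[2_000, 10, 1], size=(100_000, 3))
        flows[:10] *= 20  # plant a few grossly abnormal records

        model = IsolationForest(contamination=0.0005, random_state=0).fit(flows)
        scores = model.decision_function(flows)   # lower = more anomalous
        worth_a_look = np.argsort(scores)[:15]    # hand the analyst ~15 items, not 100,000

        print(f"{len(flows)} records reduced to {len(worth_a_look)} for review:", worth_a_look)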

    Gar O'Hara: And is there anything there around, so you've talked about terabytes of data and, you know, there's a point where humans, I mean, literally can't do that. But, you know, to get to the point where the models are good enough that they're able to identify signal, even weaker signals than a human ever could in- in just the sheer volume of noise, especially in some environments. And here I'm thinking of probably like third level education, like there's- there's organization types where they've got really funky network traffic. To get to the point where the models start to get better at that or, you know, they can see things that a human probably never would be able to.

    David Higgins: Yeah, I mean, definitely, I have seen some really interesting pieces of this sort that- that link quite closely to those. You know, particularly if you're looking at, like, long-form type of attacks, you know, where people are trying to hit up specific things, low volume, long period of time. They figure out the thresholds they're able to operate under before anything hits an alarm bell and then they set kind of a percentage under that, so it's [inaudible 00:21:11]-... it just- it just hits that background noise piece and it kinda just, you know, it would- it would never even get flagged up on anything. Definitely I've seen a couple of pieces where, you know, having the right tools in place has been able to help identify that over time. Usually, usually you are putting those in after something has happened. [laughs]
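
    One way to picture that "low and slow" problem is with longer correlation windows: per-hour volumes that each sit comfortably under a naive alert threshold, but whose multi-day total stands out against the baseline. The sketch below is a hypothetical illustration on synthetic data; the thresholds and the extra 30 MB/hour "drip" are invented for the example.

        # Hypothetical "low and slow" illustration: an hourly exfiltration drip stays
        # under a naive per-hour threshold, but a rolling 7-day window exposes it.
        # All numbers are invented for the example.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(1)
        hours = pd.date_range("2023-01-01", periods=24 * 14, freq="h")
        mb_out = pd.Series(rng.normal(50, 10, len(hours)), index=hours).clip(lower=0)
        mb_out += 30  # attacker drips ~30 MB extra per hour, well below a 200 MB/hour alert

        hourly_alerts = int((mb_out > 200).sum())      # naive per-hour threshold never fires
        weekly_total = mb_out.rolling("7D").sum()      # longer correlation window
        baseline_week = 50 * 24 * 7                    # expected baseline of ~50 MB/hour
        weekly_alerts = int((weekly_total > 1.5 * baseline_week).sum())

        print(f"hourly alerts: {hourly_alerts}, long-window alerts: {weekly_alerts}")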

    Gar O'Hara: Yeah.

    David Higgins: So- so... you know, it's one of those pieces where until something happens in this space, you don't look at the right tooling and capabilities to be able to sort of defend against it, because you've not really got the use case figured out in the organization to look at those things specifically. But... I- I think that, you know, for me, the automation piece is really, really important. I think that it will be able to reduce the amount of overhead on analyst teams. But I don't think that it will ever get to the point where it will be sort of the main thing that we rely upon for verification of, you know, the- the validity of a type of event.

    But, you know, there- there's no question to me that the amount of processing and the ability to make longer correlation happen is better with an automated system rather than a human system. Because we've- we've got short memories. You- you and I were talking just briefly about, you know, how we have to keep lists and everything, right? Yes, exactly [inaudible 00:22:41].

    Gar O'Hara: [laughs]

    David Higgins: I j-... I think we were, I'm not sure, you know? So, you know, we have- we have very short attention spans and memories. And, you know, that's not something that computers have. And of course if it's- if it's doing the same thing time and time and time again and you've got it looking in the right way, then it's gonna go "That seems abnormal when I consider the other context of your network." So, you know, I do think there's value there. But, like I say, I- I still believe that humans are where it's ultimately at. Security is a- is a people-led capability, in my opinion.

    Gar O'Hara: Yep, I d- definitely get that. And probably you-... Before we kinda move on to the people conversation or part of the conversation fully, one of the things I've been kind of mulling on lately with AI, and it's not just in cyber, but I would say there's- there's many roles, many jobs that you go into. And part of what you do in the first, you know, depending on the role, like it could be a year, two years, three years, sometimes five years. It's the kinda donkey work and the repetitive, what seems like really, why would anyone ever need to do this. But you're actually building muscle memory and, you know, it seems tedious and boring but actually, you know, there's learning by osmosis, that it's become a part of your DNA. Thinking here legal, I'm guessing some versions of cyber like [inaudible 00:23:57] and sort of sales engineering, there's a bit where we just, I mean... we do lots of demos and we do lots of talking and- and, you know, eventually kinda get good at it. Do you feel like, if we- if we let the machines do that, we skip some part of like foundational learning for humans before they become, you know, a sort of black belt cyber analyst? Like we're stealing something?

    David Higgins: I- I-... that's a great question actually Gar, I like that question. Because immediately it brings to mind an earlier part of my career. I was a systems admin for the Honda motor company in the UK for a couple of years. And one of the things that I saw there was they obviously went through a period of highly automating and putting a lot of robots into their production lines on the actual, you know, the assembly for the cars that they were making there. The thing that I really got from it was the- the main benefit was of course an efficiency piece, you know? 'Cause, you know, you're absolutely right, you know, there are- there are things that need to be done exactly the same way, time and time and time and time again for a vehicle to go from one end of a production line to another.

    But the thing that I always- always took away from it was that at certain points in the production line process, the vehicle in its assembled state, part or fully, would have humans interact with it and engage with it and do specific tasks. And there were things that it needed to be able to check, verify and validate, and that needed to be done by people. I think that there will definitely be a scenario in the future where somebody might not necessarily know exactly where they're looking for a data set and they won't be able to go back and go, "Okay, I need to look at this log from this date from this thing for this place." Because it just won't be part of, I suppose, the experience of being a cybersecurity defense analyst.

    But at the same time, I think that whilst it's important to understand where your data is sourced and, you know, the- the theory around that, I believe that as the practice scales up and that need for scale is continuously applied, you know, there are bits of it where we have to accept that the- the new, I suppose, the new menial and repetitive and experience-building work is actually some of the stuff that for us today is already looking pretty cutting edge. You know? It becomes the new normal. And of course, you'll always have that old guy sat in the corner going "Back in my day it used to be syslog as far as you could see", and- and, you know, and that's- and that's fine, and that- that person needs to be there as well to be able to provide the support on the day that it does go dramatically wrong and people are like "Well, all of the automated stuff doesn't work anymore", and the old guy comes along and does the equivalent of picking up a spanner. But, for most of it, I actually think that we're- we're kind of-... we're in that period of technology adoption where, you know, what is expected to be known and understood to be able to do the work of a security professional is changing.

    Gar O'Hara: Yeah. You've actually kinda started me thinking on a different track and I'm- I'm starting to agree with that perspective. A couple of things I'm thinking there is, we've gone from a place where memory was rewarded in many jobs to a place where the ability to be very good at finding information, you know, through the internet or whatever, became a- a core skill and in some ways more important than the memorization.

    David Higgins: I- I have interviewed any number of people where I- I've said, "So if you don't know this, how would you find it out?", and I'm really hoping that they're like "I know how to build a good Google search", it's like "Brilliant. You've got what I need."

    Gar O'Hara: Well as a- as an ex coder, like I very quickly learned that, you know, the best coders are the ones who just know how to go to Stack Exchange and find the [laughs], find the good code to steal. So yeah. Yeah, that's- that's-

    David Higgins: Nothing is truly written anymore Gar, it's all borrowed. [laughs]

    Gar O'Hara: It definitely is. I mean, I'm very conscious of time and I'm very keen to talk about the people side of things. You know, obviously it's a passion topic for you, it's part of what you talked about at CyberSecOn. And I- I'd love to get your, like as an opening kinda comment, do you think there's something specific going on for CISOs that is different from other high stress jobs? High stress jobs have been around for a really long time but it feels like there's something different going on for CISOs.

    David Higgins: They have. And- and so... it- it really made me think actually Gar when I saw this question. And, you know, of course, I- I guess my experience is limited because, you know, my experience is largely around [inaudible 00:28:44] the information security industry. But you know, there- there are a lot of other very high stress jobs out there. But I think for me one of the biggest things is if you look at those jobs there's a very clear disconnect between when you are on them and when you are off. And that's been enforced in- in those areas, you know? If I think about, you know, some of the first response services that- that have, like, prodigiously high stress levels, like fire and emergency responses and things like that, they work shifts. When they are not on shift, they are not on shift. They do not go in.

    One of the things that is, I would say, quite common in the information and cybersecurity space is you never really switch off. You know? And if you don't switch off, you never get that ability to actually stop, breathe and understand where you are outside of this continuous cycle of threat, vulnerability, you know, new things. And of course, with the ability for this stuff to just happen at a continuous pace, and of course it's getting bigger and more complicated and we're getting better at understanding it. So there's always something happening. You effectively have an environment where you have a lot of stress, a lot of moving parts, and very little down time for a lot of the teams. Because even if you are off, you're never really off.

    Gar O'Hara: Yeah, it's something I've kinda thought about a little bit. I had a guy who's become a friend and he sort of works for one of our customers. But we're having dinner one night and he kinda mentioned the- the wiring of the brain. Where, if you're constantly dealing with negatives and looking for the thing that's wrong and how you're gonna be attacked, your- your brain sort of like eventually becomes programmed to look for that even when you're outside and you're at home with your family or, you know, you're- you're out shopping or you're driving. It just becomes your default, you know, you're- you're sort of constantly in fight or flight mode.

    David Higgins: Yeah. And I think that's probably something that we're not doing very well at industry level. Is, you know, we're- we're not actually recognizing-... we're not really recognizing that, you know? And then of course, one of the other things that I thought about, and it's something that I do think we need to work more on, you know, across multiple industries, is, you know, when you go through a major incident or an event within an organization. In a lot of the other sort of first response services, one of the things that they make sure that people do is they make sure that they get, you know, rested and rehabilitated before they're brought back in. Whereas, you know, in a lot of the cybersecurity and response scenarios, once you're done, that's-... you're just- just back to BAU. You know? There's no-

    Gar O'Hara: Yeah.

    David Higgins: ... there's no- there's no rest, there's no downtime, you don't get parked. Because organizations aren't scaled to be able to rest their security team.

    Gar O'Hara: You're making me think of the first season of Band of Brothers, which felt a bit like that, you know? I don't know if you've seen that show but you know, they go from one battle, they- they seem to go away for a day to have one, you know, a cigarette and a- a beer and then the next day they're back at it. And...

    David Higgins: That's it. Yeah, I mean, I- I couldn't tell you how many security analysts I- I've spoken to who are pretty much running on black coffee and cigarettes, you know? It's- it's not a small number.

    Gar O'Hara: Yeah, it's scary. Do you- do you think it's kind of understood by other- other parts of the organization? So that idea of stress and burnout. I think it's talked about a lot in- in general sort of life. And I think there's a better conversation happening these days around mental health, it's much more public, you know, we see sports stars, like, it's- it's been normalized in a useful way. But within sort of the organizations you've worked for and- and work with, how do you- how do you kinda work with peers in leadership or other teams to kinda help them understand the uniqueness of the stress that security teams are going through?

    David Higgins: Yeah, no, that's another really great question. You know, I've- I've been fortunate enough to be able to help shape security-aware cultures in the last sort of three organizations that I've worked in, mainly through sort of my security leadership roles. I think, you know, the understanding is there at a very general purpose level, I would say. And certainly that's been my experience when I've entered into these organizations is, you know, they're like "Okay, well we know- we know we need to be practicing cyber, we know we need to have, you know, a certain amount of controls in place. We- we wanna have a better understanding of structure and the things that we need to be able to do to be able to get that uplift." And, you know, that's a- that's a good start. It's a good starting place. Because if- if the organization is engaged, it's at least open to the conversation that you need to be able to have to talk about some of these things.

    You know? It's like one of the most common things that I find is when I go in to, you know, audit and risk committee and board meetings, things like that, in organizations. You know, those people on the boards are sort of going, "Well how can we- how can we not be a target for this?" And I said, "Well, sorry, you're always going to be a target. The understanding you need to bring to mind is that over a long enough time frame, you're going to have an incident. But ideally, if you structure it well and structure it correctly, there will be a lower impact overall in terms of how much of a victim you are of cyber crime." You know? And- and it's that piece of-... that- that area of going, well, rather than trying to avoid it, understand it, understand what it looks like when it happens, and then think about where you want to be able to get to in terms of your ability to be resilient against the inevitable.

    Gar O'Hara: So if I'm understanding you correctly then, the- the reframing away from, you know, "We're gonna protect and make sure it never happens" to "Hey, let's have an annual conversation, it's gonna happen", that acknowledgement of reality sort of reduces the kind of-... the texture of the stress or- or helps to?

    David Higgins: It- it definitely helps. The other thing is, you know, I- I think... You know, it helps in so much as- as once- once you have that understanding, you're able to start having really meaningful conversations about building capability, about, you know, what does resilience look like for this organization? You know, you're able to start having meaningful conversations with leaders and executives about, well, you know, why do we practice running through incident response scenarios? And by the way, if we haven't done that in the last 12 months, we're going to do one here and everybody needs to be involved. You know, it brings to mind one of my favorite stories, and I- I probably brought it up at the- at the security conference, but it was about Rick Rescorla, who was the head of security for Morgan Stanley in the World Trade Center.

    You know, and this goes back to- to sort of, you know, obviously pre- pre 9/11. But he- he recognized that the- the World Trade Center was quite vulnerable, and he- he'd sort of put together a piece where he- he expressed to the executives and the leaders of Morgan Stanley that he felt that- that where they were placed, placed them at risk of a potential terrorist attack designed to- to attack the World Trade Center. Now obviously everyone focuses on 9/11, but prior to that there was a guy that actually did put a bomb in a car in the basement, and they put it exactly where Rick Rescorla said they were gonna put it. And so that got him a lot of influence with the executive teams.

    And one of the things that he insisted upon and got buy-in for was that it was mandatory for every member of staff to evacuate via the stairwells on a quarterly basis from that point forward. Now, Morgan Stanley had about 20-odd floors of the World Trade Center, starting at like the 42nd floor. You know, it was a lot of stairs that they had to go through. And obviously it was a number of years before- before it went from that instance to the attack on the World Trade Center occurring. But every quarter, every member of staff evacuated on foot from the World Trade Center buildings for Morgan Stanley. And that meant that when 9/11 happened and the evacuation order was given by Rick Rescorla, because the Port Authority of New York told everyone to stay at their desks and he went "No", that- that he was able to get people to get out of that building because he'd built the right influence and he knew where he had the ability to control whether or not a thing happened.

    So, you know, I think, you know, those understandings and that ability to- to- to have not just meaningful conversations, but ones with good outcomes, and the ability to understand what you're able to control, what you're able to influence, and then the things around the outside that are just general concerns and things that you- you don't-... you know, they're- they're out there, they're gonna happen, you know? The idea of a security incident occurring is a concern. You know, over a long enough time frame it's going to happen. What are the areas that I can influence to try and minimize the impact and what are the things that I can control to respond when it does happen? And that's kind of, I think, for me, looking at it in that context and with those kind of frames of reference: starting with the control, then the influence and then finally looking at the concerns.

    Which is kind of like an inside-out frame of reference, versus a lot of professionals who look at it from the things that are concerns. And that's kind of like your outside-in frame of reference, which is far less effective in my view. And definitely something that I've found is that when you are focused on the things that you can control and the things that you are able to influence to support, you know, the- the concerns that are going to happen, it does let you grow your influence, which is kind of that area where you can really engage with the organization and the business to be able to get mutually beneficial outcomes, you know? Good for them, good for you, good for everyone. Being able to get success in that space is definitely one of the most rewarding and also one of the best ways of reducing overall stress levels for me as a security professional.

    Gar O'Hara: Reducing those overall stress levels. I- I think we're- we're about to run out of time, so I'm just gonna end on the-... that personal level though. I've asked this question of a few people but I'd love to get your take on, like, what- what do you see as the early warning signs? Like, when you see regular stress and the-... maybe the coffee and cigarettes are transitioning into something that's much more serious, like burnout. And- and how do people take care of themselves?

    David Higgins: That's a great question. It really is. And I think that, you know, everyone- everyone's idea of self-care is- is different. You know, I've- I've thought long and hard about this. You know, as I mentioned at the start of our talk Gar, I'm- I'm currently on a break because I recognized that I needed to be able to focus on myself for a while. You know, I- I feel quite privileged to be able to do that, that's not something that everybody who needs to be able to take a break is in a position to be able to do. But definitely, for me, one of the main warning signs was just sort of a general lack of satisfaction with everything. You know, it- it sort of... if the things that normally would bring you joy and pleasure, not suddenly, but over a period of time, stop doing that, you know, you might not see it.

    You might not see it initially because when you're in these situations, it's a lot like the- the adage of boiling the frog, you know? You're in the water and the water gets heated slowly and you don't recognize that something's wrong until it's- until it's too far gone. But, you know, it took me quite a while to- to recognize that actually, you know, there were a lot of things that I used to do that I enjoyed and I just stopped doing them because I was focusing more on work. And this-... sort of this view that if I just give it a little bit more, I'll get over the thing. But you don't get over the thing because there's always another thing to come in and consume that. And, you know, it ends up being absolutely consuming, you know?

    Some of the classic signs, yeah, you can't switch off. You've kind of got, you know, security I think has got a bit of a problem with ego. And I mean this in a way where, you know, because we focus exclusively on defending and protecting we think that the organization won't be able to function if we're not doing our be-... and we're not actively protecting all the time. So I think being able to sort of recognize that and maybe deplete the ego and go, actually, I need to create systems whereby I can not be here and that's okay, is probably one of the ways that I would say that you, you know, take care of your team within the organization. But in terms of taking care of yourself, the only real advice I've got is that you need to practice being kind.

    Something that I've said to a lot of people, and something that I've been very, very guilty of is, you know, I can be very self-critical. And this is something that I've her-... seen and heard a lot of people saying about themselves. And I once asked somebody who I heard, like, they were giving themselves a hard time, and I- I said to them, "Would you ever let anyone else talk to you the way that you talk to yourself?" And they were like "No", I said "Okay, well why- why do you let yourself do it?", like, "Well, I don't- I don't really mean it", I'm like "Hmm, you keep saying it to yourself and you will. You know, you'll start believing it." And- and it's-... I think it's something that, you know, that we- we forget. And you can lose sight of it very easily. Particularly if you're in a stressful environment. So, you know, again, the only real advice I've got is be kind. But understanding what being kind looks like goes a long way to helping you out there.

    Gar O'Hara: Fantastic David. It's been an absolute pleasure, and I do mean that, to- to talk to you today. I think you're one of those guys I could- I could see myself sitting in a pub with for many hours and talking cyber and all the other things. And in my head I'm wondering is one of the things that you used to enjoy [inaudible 00:42:37], is that like wearing a kilt in the kinda savory breeze that maybe blows around?

    David Higgins: [laughs]. Oh, you know, well look, the- the truth is that, as any good Scotsman knows, kilts are very expensive in terms of an overall investment, and you gotta make sure that you get your maximum return on value from it. So, you know, yeah, Gar, look, Wellington is a famously windy city in New Zealand, but even there, I- I always love putting it on for the right event. So you know, hey, likewise, it's been a real pleasure to have a conversation with you today Gar. I think, you know, you're- you're asking some great questions, and the opportunity to explore these parts of the- the role and the work that we do is probably not- not considered often enough in my view. Look, I- I really do hope that we get the opportunity to sit down, have a couple of beers and talk all things cyber and otherwise, and maybe I'll even wear the kilt.

    Gar O'Hara: Well, I- I look forward to it, if- if that's on offer then I'll definitely be there. Thanks so much David.

    David Higgins: Sounds good. Thanks Gar. Appreciate that.

    Gar O'Hara: Thanks so much to David for joining us for the podcast. Such infectious energy. And thank you for listening to the Get Cyber Resilient podcast and do jump into our back catalog of episodes and like, subscribe, and please do leave us a review. For now, stay safe, and I'll look forward to catching you on the next episode. 
