101. Crisis Talks with Grant Chisnall


Claire is joined by Grant Chisnall, a crisis trainer, advisor and podcaster who has a passion for leadership communication and decision making. In this episode they cover a lot of ground, including the escalation from incident response to crisis management, business collaboration before an incident, and how to plan for resilience while mopping up a cyber incident.

Grant has supported some of the world's leading organisations through crisis events ranging from cyber attacks to coronavirus, activism to air crashes, and natural disasters to workplace fatalities. His podcast 'Crisis Talks' tells the extraordinary stories of people who have led through crises, and their leadership and resilience in the face of adversity. Grant's aim is to help leaders prepare for worst-case scenarios and respond proactively and with confidence to any incidents that threaten their people, operations or reputation.

Links:

Grant LinkedIn

Left of Boom website

The Security Collective podcast is proudly brought to you in partnership with LastPass, the leading password manager.


Transcript

CP: Hello, I'm Claire Pales and welcome to The Security Collective podcast. Today's guest is Grant Chisnall. Grant is a crisis trainer, advisor and podcaster who has a passion for leadership communication and decision making. He has supported some of the world's leading organisations through crisis events ranging from cyber attacks to coronavirus, activism to air crashes, and natural disasters to workplace fatalities. Grant has his own podcast 'Crisis Talks', where he tells the extraordinary stories of people who have led through crises, and their leadership and resilience in the face of adversity. When we came together, Grant and I covered a lot of ground including the escalation from incident response to crisis management. We talked about business collaboration before an incident and how to plan for resilience while mopping up a cyber incident. If you liked Grant's insights today, please check out his podcast. But for now, please welcome crisis expert Grant Chisnall to The Security Collective. Grant, thanks so much for joining me on The Security Collective today.

GC: Many thanks for having me, Claire.

CP: So most cyber professionals will spruik that cyber is, in general, a shared responsibility, but narrowing down to the work that you do, you see cyber crisis preparedness as the shared responsibility. Tell me a little bit about this and where organisations should be focusing their efforts.

GC: I think a lot of organisations rightly assign the responsibility for the preparation of these sorts of plans to their cyber teams or their IT teams. And that's a really important step, I think, first and foremost, because you need the technical understanding of what might occur in order to prevent these sorts of things occurring. But what you need from a response perspective is actually a completely integrated response, right from the ground up. So from the point of incident management, into activation of the appropriate response, and a connection with crisis teams and boards, all those things require integrated planning. So the biggest challenge, I think, is when the IT teams are planning for these things, ensuring that they have the right business engagement. And for the business teams, it's actually about engaging with their boards, and making sure that those boards are also across what the plans would look like: how they would be likely to respond to different scenarios, for example, whether they would or wouldn't go down a ransom payment pathway, or whether they would initiate, or when they would initiate, data breach notification processes. The stakeholder engagement pieces are so crucial. And obviously, then we're looking at the liquidity impacts that can result quite immediately if you're a consumer-based business. So all those effects require an integrated response. And that means preparing together, planning together, and actually getting into testing and practising together.

CP: I think one of the key things that gets forgotten is that the document often gets written by an individual team. And even if there is some socialisation, the practising of that document, so that it is live and it is understood, often gets forgotten. And it's such a shame, because these documents become shelfware and probably don't get leveraged in the best way that they can.

GC: They always say plans are useless, but planning is essential. And it's about taking those plans and keeping them alive in terms of what you're doing to monitor the ongoing environment in which we're operating, and monitoring also the sensitivities to the situation. I think we've seen that recently with the supply chain issues and then the travel disruptions; right now we've got a situation with an IT glitch, as they've called it, which has affected ticketing systems. The tolerance levels to these outages are so low at the moment that the outrage becomes really high. So all of the systems that are in place around monitoring need to also be aligned and integrated with the risk awareness and the risk tolerance levels on a live basis, so that any time you're confronted by a situation, you're assessing it, you're activating the right level of response, and you're really integrating that planning into reality in your response measures.

CP: And it's interesting you talk about that, you know, taking the right level of response and really understanding how your organisation is going to deal with it. Because what we see a lot of, I think, is people see a little bit in the news, a little snippet from the media that says such and such organisation has been hit, and there's a big focus very, very quickly, and then that tends to fall away. There's very little focus on the mop up that can sometimes go on behind closed doors for months, and sometimes longer. What do you think organisations can do to improve their resilience in terms of a better or more efficient mop up, and really thinking that through? As you said, that planning is hard to do because you don't know what sort of an incident you're going to have. But I just think the mop up is the part that we're not planning for the best.

GC: I couldn't agree more. In terms of left of boom, we're talking about the purpose of the organisation, the why that's so crucial, that differentiates you from your market, that you need to protect. Pre-empting is really about the things in the plan that you're doing to identify and monitor situations before they arise, so you can ideally adapt before they emerge. And then preparing is exactly as we spoke about before: it's how the teams come together to practise, and also to drill what they're going to do in response. Then when you get right of boom, you're talking about response, reassurance and recovery. Now, recovery starts at the point of that incident. And so if you're not structuring yourself to recover, then all you're doing is continually finding yourself in a response cycle, which you want to get out of very quickly. And I think the mop up considerations are often underestimated. The impact of these disasters can be felt for months, if not years, afterwards. In the analysis that we've performed, not just on cyber incidents but also on crisis events, the average recovery times we're talking about are generally about one year to 18 months. And when you think about it, you're thinking about the people recovery. Even if it's a cyber incident, what you've had happen is a question of trust. You've broken the trust, for whatever reason, because something's failed. And so the trust of your consumers, the trust of your customers, gets filtered through your frontline employees, and the internal trust as well often becomes overlooked. And when we get to a situation where, you know, George Bush Senior says the war's over and the flags go up, and everything's back to BAU, or whatever other language we might use from a business perspective, that tail of recovery still continues on. So we've got to invest in that, be prepared, and scale your teams accordingly so that you can manage that recovery in the long term.

CP: I just want to come back to something you just said that I found interesting, you used sort of three R's, response, reassurance and recovery. Tell me about reassurance.

GC: Reassurance is really about that battle for trust, to regain the trust that you might have broken. We use a mnemonic, or an abbreviation rather, called the four A's. And in the four A's, we talk about acknowledge, apologise, assure, and act. The acknowledge is really acknowledging you've got a problem, and what your part has been within that. Apologising is obviously about stepping up and saying we're sorry for what has happened, for the effect we've caused, the impact we've created within whichever stakeholder base you're dealing with. But then the assurance piece is probably, again, one of those things that are often overlooked. And assurance comes from showing what you're doing, demonstrating openly and transparently what you're doing, which often really requires third party support. In situations where you've had an IT outage, for example, which is of your own doing, a cause of your own, then it becomes even more important that you have a third party that steps behind you to say, look, we have also tested and validated the plans, or the recovery plans, or whatever it may be, and can give that extra layer of assurance to the stakeholder base that what you're doing is what you're saying. At the end of the day, we want to see that action follow through. So you're saying what you want to do, but actually doing it and following through to the action point. A great example of that assurance piece recently, I think, was the CHESS outage from the ASX. They had, I think, EY or one of the accounting firms come in not just to validate what they'd done on the incident or outage issue to start with a few years back, but also to further validate their planning as a third party step as well.

CP: Which I think is incredibly important for organisations who have gone through a crisis. I'm sure it can be very isolating when you're going through a crisis that, in many instances, no one else has been through. With cyber, every organisation's maturity is different. You might have the same virus or the same malware or the same type of ransomware incident, but for your business, your response is going to be very different. And to have a third party come in and say, actually, these are things you did well, and here are some steps that you need to take to aid your recovery, would probably be quite comforting in many ways. It may be a bit stressful at the same time, but at least it gives you a way forward that you don't necessarily have to work out for yourself.

GC: It's a great validation either way, if you're not sure. And a lot of the time, it's the first time that organisation might be dealing with this type of event. The role I perform is really as a coach in a crisis situation. So I'm helping them validate their processes from a crisis management perspective. I'm helping them think about what the next steps are going to be, how you think about the recovery, how you think about those stakeholders and impacts, and those issues that you need to address. But a third party like an accounting firm or a cyber firm also gives you the technical assurance that you need to really validate your plans. So SMEs and third parties for the assurance piece become a really important step.

CP: And also potentially give you the confidence for your third parties to reconnect with you, or for you to reconnect with a third party, after what might need to be a major cleansing of your IT networks.

GC: I can believe that you probably know what you're about when you're talking about your own systems, your own architecture. But equally, when you've broken that confidence, do I really believe and trust you now? I'd rather hear from somewhere else to further validate that as well. So it's always a good addition, to validate your plans, to have a third party provide that assurance.

CP: The other thing I wanted to loop back to was something you mentioned earlier around bringing the business together across functions. If you're going to future proof an organisation and be prepared, how is bringing those functions together strategically going to help the business to be future proofed?

GC: Future proofing for organisations is crucial from that preventative stage, as well as having that alignment with the overall business strategy. And so it's okay, in some cases, for different teams to come together and plan what they might be doing to address a contingency. But they also need to consider the longer term strategic intent of the organisation. So a plan about failing forward, for example, might involve failing forward to a new set of infrastructure if something does fail at this point in time. If you have a situation where you've broken something, then you might be looking at another third party as part of enhancing your resilience to future events. Either way, bringing the whole organisation together, the leadership, alignment with the board, but then also the IT or technical teams and the operations teams, really helps shorten that gap in awareness between what has happened and the expectation around when it can actually be restored. That's probably the biggest issue we see: the more technologically savvy an organisation is, the smaller that gap is. If people are technologically proficient, they'll understand that if something happens, it's going to take at least X amount of time. But when they're not, and they're just relying on turning on a computer every other day, and let's face it, most middle aged males around my age and above who are in these senior positions often struggle with those basics on tech, then the challenge is really how do you manage their expectations if something goes wrong? How do you manage their expectation of a complete failure scenario? So the whole point about that planning and bringing everyone together is about closing that gap in understanding, and then closing any gaps around expectations post incident.

CP: I think that's a really good point around the IT system outage part of a cyber crisis, because we spend a lot of time focusing on resolving and containing and making sure that the "cyber" crisis is responded to, and, you know, we get the forensics in, and it's very much the focus around the crisis in terms of the cyber attack. But any IT outage, whether it's caused by a cyber attack or not, is going to cause issues for an organisation, and for some departments more than others, depending on which part of your business is impacted, you know, whether it's quality assurance or your finance systems. If we help people to understand how these systems work and why they work, and, you know, get people together collaboratively, do you think that's going to help in the face of a crisis to make sure that people are more understanding of the IT outage part of it?

GC: Yeah, I think the key difference with an IT outage versus a cyber attack is that in an attack you're a victim; in an outage, you're the perpetrator. You're the one that's caused the problem in this instance. So it's actually more of an own goal, from a reputational impact and an internal impact perspective. And if it's been a change you've been implementing that hasn't been planned for properly by the IT team, or the architecture team, the networks team, whichever has been involved in that part of the change process, then that's really a failure of your process. So it goes back to that question of trust. So it's rebuilding that trust, making sure you've got the right teams together around planning for these impacts, planning for these changes, and not just relying on, okay, we're going to turn this thing back on tomorrow, it's going to be all on and everything's going to be fine and dandy. The whole point about these things is planning and embracing failure, so that if it does go wrong, you've got the right contingencies in place, you've got the right escalation protocols in place with the right people. And those people are then pre-emptively aware of the situation and can further pre-empt any issues with the downstream stakeholders by being active when they need to be, early.

CP: A lot of organisations are really focused on the documentation. And we talked a little bit earlier about, you know, how the plan is important, but the planning is actually what we really want to get right. A lot of organisations come to me and talk about the fact that they've got a document because the regulator needs one, or because every time they get an IT general controls audit they've got to have these documents. Why do you think the regulators and the auditors want you to have a document that can sometimes run into the hundreds of pages?

GC: I think it's like a security blanket. They want to have something they can hold on to, to say, yes, we have proof that it's been done, and we have proof that they've been through something, and we can also then hold them to account in case it does go wrong. And probably, if you think about it from that point backwards, you go, okay, if it goes wrong and you haven't got the right preparations in place, you haven't got the right preventive protocols in place, you haven't got the right monitoring controls in place, and you haven't then also executed on a lot of the requirements you've got from NIST, or Essential 8, or any of those other sorts of frameworks, then you've really got a problem. Or even more so when you have a plan that said we've done these things, but then you haven't. So the point about planning, and continually upgrading and updating these things, and actually practising and preparing your organisation for them, is more important than just showing a plan to say we've got a DR plan, we've got this particular piece of paper that says here's what we're about. I think the really good auditors and the really good regulators now, when they're looking at your plans, aren't just taking an attestation that you've aligned with whatever framework; they're actually digging a bit deeper. They're getting into: have you actually trained on it, are your teams aware of it, have you done an exercise to validate that plan, and have you done the drills and simulations that also practise the execution of that plan under duress? And the good organisations are not just looking at that one document either; they're looking at how it integrates with their incident management planning from an ITIL process, how they're escalating into their crisis management frameworks, how that ties together with their business continuity plans when they've got an outage, and importantly, how they're also going about the disaster recovery, restoration, validation and testing afterwards. So all those things now become part of an ecosystem of controls that is so crucial, and no longer standalone.

CP: Are you starting to see cyber inside the business continuity plan documentation? Or are you seeing the Incident Response Plans covering cyber with, you know, an escalation path or a note to say that, you know, should it get to this level of complexity or criticality, then we move on to the other document and into a new phase of crisis management? Are you seeing the two documents start to blend or are you seeing that they're standing alone?

GC: It's horses for courses, and it's also, I think, a result of the jargon that's created within the industry, whether it be the IT industry, the risk industry, or the industries we're dealing with. We deal a lot with, you know, logistics, mining, critical infrastructure, as well as financial services. All those industries have their own nuances around the way they approach planning and the requirements they've got for it, and so we do see a number of different approaches. I think the best case now is that you've got integrated plans from the ground up, and that's one of the critical success factors we talk about: integrated planning. That you've got integrated communications, so you're factoring in the facilities, the equipment, the systems, and you've got backups around all of those. So if you do have a complete bare metal restart, then you've got the means to at least communicate with the appropriate stakeholders you need to, and escalate the right protocols. Your crisis plan should be the same, and crisis teams, ideally, will be the same, give or take your technical components, depending on the scenario you're dealing with, whether it be a cyber or a physical event. And at the end of the day, your BCP and your Disaster Recovery plans all serve a purpose around continuing the business while something is down, and recovering and restoring quickly, so you can actually recommit to your stakeholders in an assured way.

CP: So if you could give the listeners your best tip, when it comes to cyber and business continuity planning, what would be your one piece of wisdom that you would impart today?

GC: Look, I think early activation is key, because early activation enables the understanding, or a common understanding, of the situation that you might be dealing with. It brings together the right experienced people to assess what you're dealing with and then start to generate the appropriate response. And then, ideally, that early activation enables you to pre-empt any issues in the longer term. So if you could focus any efforts on ensuring that you're getting the right people around a problem quickly, then that'll address, I think, 95% of any scenario, whether it be a cyber event again, a physical event, whatever it may be. So that early activation, I think, is key.

CP: I think when I put that question to cyber insurers and digital forensics firms, all of them have exactly the same opinion as you, and that is: early on, give us a call, you know, let us know. Get the right people around the table, because you'd rather a false alarm than a much bigger disaster. So yeah, good advice, I think.

GC: Yeah, we use the metaphor of first aid. Remember first aid used to be DRABCD, or actually that last D came in later, it used to be DRABC. Now it's DRSABCD, and the S is send for help. So you've got danger, response, send for help. Getting that message out there, getting the awareness out on the situation you're dealing with, builds what we call, again in military or emergency services parlance, a common operating picture, or builds situational awareness, again, some of the jargon terms that we use. But the point is everyone understands what the problem is, and then they can start to deal with it in their own respective functional way and ensure that they are stepping ahead of the situation as it emerges.

CP: Thank you so much for joining me today. I think we could probably talk for another 20 minutes about BCP and cyber, but I've really enjoyed the chat, and thank you so much, Grant, for being with The Security Collective today.

GC: Awesome, thanks for having me, Claire.
