
As cybersecurity evolves, so should your board



But how many directors get lost in the technicalities of technology? The challenge for a chief information security officer (CISO) is talking to the board of directors in a way they can understand, so that they can support the company.

It’s drilled into the heads of board directors and the C-suite by scary data-breach headlines, lawyers, lawsuits, and risk managers: cybersecurity is high-risk. It’s got to be on the list of a company’s top priorities.

Niall Browne, senior vice president and chief information security officer at Palo Alto Networks, says that you can look at the CISO-board discussion as being a classic sales pitch: successful CISOs will know how to close the deal just like the best salespeople do. “That’s what makes a really good salesperson: the person that has the pitch to close,” he says. “They have the ability to close the deal. So they ask for something.”

“For ages,” Browne says, CISOs have had two big problems with boards. First, they haven’t been able to speak the same language so that the board could understand what the issues were. The second problem: “There was no ask.” You can go in front of a board and give your presentation, and the directors can look like they’re in agreement, nodding or shaking their heads, and you can think to yourself, “Job done. They’re updated.” But that doesn’t necessarily mean that the business’s security posture is any better.

That’s why it’s important for CISOs to raise the board’s understanding to the level where they know what’s needed and why. Especially when it comes to new advances in cybersecurity, like attack surface management, which is “probably one of the areas that CISOs focus least on and yet is the most important,” Browne says. For example, “many times the CISO and the security team may not be able to see the wood from the trees because they’re so involved in it.” And to do that, CISOs need a set of metrics so that anybody can read a board deck and within minutes understand what the CISO is trying to get across, Browne says. “Because for the most part, the data is there, but there’s no context behind it.”

This episode of Business Lab is produced in association with Palo Alto Networks.

Full transcript:

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is cybersecurity and corporate accountability. In recent years, cybersecurity has become a board-level concern, with damaged reputations, lost revenue, and enormous amounts of data stolen. As the attack surface grows, chief information security officers will have increasing accountability for knowing where to expect the next attack and how to explain how it happened.

Two words for you: outside-in visibility.

My guest is Niall Browne, who’s the senior vice president and chief information security officer at Palo Alto Networks. Niall has decades of experience in managing global security, compliance and risk management programs for financial institutions, cloud providers and technology services companies. He’s on Google’s CISO advisory board.

This episode of Business Lab is produced in association with Palo Alto Networks.

Welcome, Niall.

Niall Browne: Excellent. Thank you, Laurel, for having me.

Laurel: So as a chief information security officer, or a CISO, you’re responsible for securing both Palo Alto Networks’ products and the company itself. But you’re not securing just any old company; you’re securing a security company that secures other companies. How is that different?

Niall: Yes, so I think, the beautiful thing about Palo Alto Networks is that we’re the largest cybersecurity company in the world. So we really get to see what an awful lot of companies never get to see. And if you think about it, one of the key things is, knowledge is power. So the more you know about your adversaries, what are they doing, what methods they’re attempting on the network, what are the controls that work and what are the controls that don’t work, the better placed you are to create your own internal strategy to help protect against those continuous attacks. And you’re in a much better position to be able to provide that data to the board so they can ensure that the appropriate oversight is in place.

So certainly for us, with that level of knowledge of what we get to see in our networks, that really gives us the opportunity to continuously innovate. So taking our products and continuously building on those, so we can meet the customer requirements and then the industry requirements. So I think that’s probably the first part. The second part is, we’re really in this boat together. So part of my job is continuously talking to individuals in the industry and fellow CISOs, CTOs, CIOs, and CEOs talking about cybersecurity strategy. And invariably, you’ll find the same issues that they’re having are the exact same issues that we’re having. So for us, it’s really the opportunity to share, how do we ensure that we are able to continuously innovate, make a difference in the industry and really collaborate on an ongoing basis with industry leaders. Especially focusing on how we secure our business and provide best practices as to how companies can be more secure.

Laurel: So some people may be surprised that collaboration and this kind of open sharing of knowledge is so prevalent, but they shouldn’t be, right? Because how else are you going to all collectively defend against the unknown attackers?

Niall: Great question. And if you look at it on the opposite side of the fence, hackers are continuously sharing. Albeit they’re sharing for financial gain. In other words, they’ll steal data and they’ll resell it and resell it and resell it and resell it. Hackers are continuously sharing that data, including DIY toolkits. And on the security side of the house, there’s always been historically that legacy suspicion. In other words, I’m the only person who’s having this problem uniquely. And if I share this problem, they’ll think that I’m not doing a good job or the company isn’t doing a good job, or I’m the only person who’s having this specific issue. And what happened over time is, CISOs didn’t share a lot of data, which means the hackers were sharing data left, right, and center. But on the CISO side of the house, on the protection side, there was very little collaboration, which meant that now you had limited shared industry best practices.

Each CISO was in their own silo, in their own pillar, doing their own unique thing, and everybody was learning from their own mistakes. So it was really a one-to-one model. You make a mistake and then you make another mistake, and then you make another mistake. However, if you could talk to your peer, imagine in business or finance, you’re continuously talking to the CTO and the CFO to say, “Oh, by the way, how did you manage such and such issue?” So I’m now seeing the industry starting to change. CISOs are now starting to change, and share. They’re continuously talking about strategy. They’re continually talking about how do they protect their environment? They’re talking about, what are some of the good business models that work?

And if you look at MIT, there’s industry and technical and business models that really work in other industries. But then, if you look in the CISO community itself, it’s like, what are those industry best practices? And now they’re only starting to get kind of formulated up, bubble up from there. And what I’m seeing, certainly over the last, I would say three or four years, there’s a tremendous growth on the CISOs in relation to learning industry best practices, and really uplevelling their skillset. So they’re just not that technical geek in the corner. They really need to be able to talk business technology, be able to talk business terms, and really be able to be seen as that close peer to that CTO, to the CIO, to the CEO in relation to solving business problems.

Because if you think about it from a cybersecurity perspective, at the end of the day, it’s just a business problem. And if it’s a business problem, you need to apply strategic business solutions to solving those issues. Instead of talking about what version of antivirus you’re on, you really need to uplevel the conversation, so that, when you speak to the board, when you’re speaking to the same C-level executive, they’re not throwing their eyes in the air. They understand that you’re talking the same business language as them. Which means, again, if you’re a trusted business partner, then you can make a huge amount more difference in the company, as opposed to being seen as that junior IT leader in the organization that somebody only ever comes to if we get hacked or if a backup fails, or if a Mac is broken.

Laurel: I really like that analogy…growth of the position itself. Like you said, it does actually elevate this role to the board table because it is a business problem with a possible business solution. But how can boards then in return make better decisions? You will then also have to bring some data and information and something to help the board along with all of the other decisions they have to make across the entire company.

Niall: And that’s the key thing, is that most people, when they look at it, it’s classic sales. You can have the best salesperson in the business, but unless they have the close, and the close is the ask. Here’s a great product, and I want to sell this product, i.e., this car for, let’s say, $50,000. And then at the end of the sales pitch, will you buy the car? And that’s what makes a really good salesperson, the person that has the pitch to close. They have the ability to close the deal. So they ask for something. So I think for ages, CISOs had two big issues with the board. One is, they weren’t able to report the right data up to the board and speak the same language where the board would be able to understand what the issues were.

And then two, there was no ask. And that’s very important because if you go into a board and you present and everybody’s nodding and shaking their head and understanding it, sure you’ve updated them, but the security posture is none the better. And if you look at a classical board, any board itself, they’re there at a very, very high level, obviously, to serve the company. So any of the board members or any of the boards that I’ve worked with in the past, they have been extremely willing to help the business itself. So they’re always looking at, “Well, you presented X, but now, how can I help?” So I think CISOs need to flip it into more of being that salesperson with the close. Most importantly, what’s my ask?

And a classic board meeting, I think that goes well, is, you sit down, you work with the board, you show a core set of metrics. Now, you don’t want to show metrics on numbers that are absolutely meaningless to the board. If you look at the board, the board has a wide range of skill sets. Some board members may be compliance experts, some may be business leaders, some may be finance leaders. So it’s really about when you communicate with the board, two sets of things. One is coming up with a set of communications or metrics, and really outlining the business case so that anybody can read a board deck, and within minutes they understand what you’re trying to get across. That’s critical.

And then a second part is, it’s not a presentation. Every board meeting should end with time at the end for questions and answers and for the ask. And I would say, a good board meeting is whereby you don’t even go through the deck. You share the deck in advance, they’ve read through it, they were able to understand your cybersecurity posture by just looking at your deck. And then the board meeting doesn’t even refer to the deck. It’s a simple set of questions, comments back and forth and then the ask. And the ask could be, “Listen, can we get some more focus on a certain area itself or more resources?” Or they may have an ask of you as well. So again, I think the model really is, communicate a core set of data and then making it a conversation with a collaborative ask from both sides versus coming up with a 30-slide deck that nobody understands that you present it and then you run out of the board meeting from there. That model just doesn’t work, as we know.

Laurel: Yeah. Not for anyone, right? So what specific metrics do you actually report back to the board and why are those metrics important to your board or any other board?

Niall: The issue with any industry, including cybersecurity is, sometimes there’s just too much data. So, if you look at industry standards like ISO 27001, you may have a hundred and something controls. If you look at FedRAMP, you’ve got 300 something controls. If you look at COSO or COBIT. So you don’t want to go to the board with, “By the way, here’s 2,000 controls. And here’s how we’re in compliance with these 2,000 controls.” Because for the most part, the data is there, but there’s no context behind it. So they’re wondering, like, “AV being on 95% of end points, is that good? We scan once every, let’s say 12 hours, is that good?” So they’re what I call meaningless metrics. They have no benefit whatsoever for most InfoSec people, never mind board-level leaders. So from our point of view, we break it into simple core sets of pillars that we can measure over time.

And generally, you don’t want to have a set of pillars that’s 25 pillars, because that’s too many because you’re not able to measure one versus 25. So internally, we generally settle in about five major core areas that we focus in on and we measure against those each time. So one is, secure our products. Most organizations are very, very product-centric now. So products in most companies are becoming critical, critical, critical. So one thing we measure is how we are protecting our products. And we rate ourselves on a scale of zero up to five being maximum maturity.

Now, if you have really good products, but they’re sitting on infrastructure that’s insecure, you have an issue. So the second one is, secure our infrastructure. And the third one is detection and response. So that if you’ve got really secure products on really secure infrastructure, but nobody’s looking at it and nobody’s measuring or monitoring the environment for attacks, then you have an issue. So for us, it’s detection response is the third one, which is critical.

The fourth one then is people. And the people component, it’s absolutely…I can’t stress this enough because if you don’t have people that understand cybersecurity, then you’ve got a core issue. The vast majority of times, it’s people that do something in a company accidentally, i.e., they may click on a phishing link that compromises your network. So one thing we focus on is what we call street smart. So one of the four pillars is, can we get people so they’re street smart? In other words, cybersecurity smart, street smart. So if they’re walking down the road and they see a stranger look suspicious, well use your gut. Same thing with cybersecurity. What are the simple things that they should do or think about on a day-to-day basis that they can protect a company?

And then the fifth one really is governance. How do we do governance and how do we manage ourselves? And how do we measure our success? So if you look at it there, it’s five simple pillars. It’s just simply product, infrastructure, detection response, people, and governance. And we measure zero to five for each of those. So then it’s very easy for the board and for other members to look at how we are trending against those areas over time. It allows you to go high, in other words, the thousand-foot view. And then if there’s a question of infrastructure, you can look at the measurement, the infrastructure pillar, and then you can start jumping into other metrics later if they want. But really, that’s the way we articulate that, how we built our security program. And that’s something that I think resonates very strongly with the board, because now they’re able to measure us based on known entities versus meaningless metrics that for the most part tell them nothing.
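The five-pillar, zero-to-five scorecard described above could be sketched as a simple data structure. The pillar names come from the conversation; the scores, validation, and quarter-over-quarter trending logic below are purely illustrative, not Palo Alto Networks’ actual tooling.

```python
# Illustrative sketch of the five-pillar maturity scorecard.
# Pillar names come from the interview; the scores are hypothetical.

PILLARS = ["products", "infrastructure", "detection_response", "people", "governance"]

def validate(scores: dict) -> None:
    """Check that every pillar is scored on the 0-5 maturity scale."""
    for pillar in PILLARS:
        score = scores[pillar]
        if not 0 <= score <= 5:
            raise ValueError(f"{pillar}: score {score} outside 0-5 scale")

def trend(previous: dict, current: dict) -> dict:
    """Quarter-over-quarter change per pillar, for the board deck."""
    return {p: current[p] - previous[p] for p in PILLARS}

# Two hypothetical quarterly snapshots.
q1 = {"products": 3, "infrastructure": 2, "detection_response": 3,
      "people": 2, "governance": 4}
q2 = {"products": 4, "infrastructure": 3, "detection_response": 3,
      "people": 3, "governance": 4}

validate(q1)
validate(q2)
print(trend(q1, q2))
```

A board deck would then show the five current scores plus the trend line for each pillar, rather than raw control counts.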

Laurel: Now, what if we switched that though? What kind of responsibility does the board have to be “street smart” and have some kind of foundational understanding of cybersecurity? Or do you take that on as your own personal responsibility to spend time with each member to make sure they understand the foundations?

Niall: Correct. So for us, it’s very much a case of taking a certain level of knowledge and then building on that knowledge so at least everybody’s on the same level of knowledge. So one example is, again, you could have somebody who’s chairing that audit committee, who’s very, very technical or very, very compliance driven. And she or he may know all about boards…audits and all the frameworks. And that’s great. And then the other side, you might have somebody who’s more finance-based or more audit-based. And then the question is, how do you work on uplevelling everybody’s skillset?

And there’s numerous different ways of doing that. It’s two things. One is sitting down with them one-on-one and then providing an uplevel of conversation on, this is what we’re doing. This is our entire security program. This is how it works. This is what 2020 looked like. This is what 2021 looks like…so getting everybody onto the same level and building that relationship is very, very important.

And we continuously see that whereby our board members will reach out towards us or we’ll reach out to them in sharing data, or they’ll have an idea that we haven’t thought about and we’ll say, “Well, that’s a really good idea. Let’s incorporate that into our program.” So I think that’s very useful. And then the second part is, it’s all about telling a story. So a story and a narrative. So if you open up a book and you start at the security side and you start at the end chapter, well, that’s not very compelling. It’s like, who’s Jane? Who’s Judy? Who’s Tim? Who’s Tony? Doesn’t make any sense whatsoever.

And oftentimes, that’s what happens in cybersecurity reports is that the board is looking at…and here’s she or he that’s presenting as a CISO and they’re presenting a set of data and metrics that they don’t understand and so therefore, they can’t do anything with that. So we spend a lot of time in our first board meeting starting off with a basic set of principles, and then in each board meeting after that, every three months or so, we go into more detail incrementally. As we’re growing and as we’re building that cybersecurity deck, they get to better understand and uplevel their understanding as well. And then from their side, with that level of understanding, they can very easily jump in and say, “Oh, by the way, here’s an area I think you should be focusing in on.”

And on our board, we have some VC firms, obviously, that are highly technical and they’ll have a slant that they’ll want us to focus in on. And we say, “Sure, let’s incorporate that as part of our program.” So I would see board communication as very much a back-and-forth communication. It shouldn’t happen just once a quarter. It need not happen on a daily basis, but certainly it should happen throughout the quarter, whereby a board member has an idea and then you can incorporate that as part of your best practices.

Now, at the same time, you want the staff within that company to be able to operationally run their security team. But certainly, the insights a board member can provide are in some cases tremendous, because they’ve been in that industry for many years. And as part of that model, they would typically have seen what other individuals have never seen before. Plus, I think what’s most beneficial from there: cybersecurity, again, is a business problem and it’s a business process. So most of these board members are exceptional at solving business problems. Maybe not cybersecurity, but they can take a cybersecurity issue and they can relate that to another business best practice, and then leverage that in cybersecurity.

And frankly, I think that’s the best value a board can provide. Many times the CISO and the security team may not be able to see the wood from the trees because they’re so involved in it. For the board members, it’s a great kind of prism whereby they can look at it from the outside in, and they can provide insight based on, “Well, hang on a second, the way you’re solving this issue based in cybersecurity by doing a consulting model, that doesn’t work or that doesn’t scale. Instead, you should do a one-to-many model, i.e., fix the problem once and then it’s shared amongst all your constituents, the same as cloud does, software as a service does.” So that business slant, business perspective, I think is something that I really enjoy working with a board with, sharing some ideas and then collaborating back and forth. Because again, I think their business acumen is second to none. And if you can simply position cybersecurity as being a business issue, then you can really build a very strong increase of a collaborative environment really quickly.

Laurel: So speaking of your own uplevelling or upskilling, when did you first recognize that attack surface management was a separate new discipline that you needed to become really familiar with, educate your board on and then help staff it and plan for it?

Niall: Good question. I think if I look at ASM, or attack surface management, that’s probably one of the areas that CISOs focus least on and yet is the most important. And the reason for that is, if you look at any hacker, if a hacker wants to compromise your environment, the first thing that they will do is to first get to know your environment. So an example is, if you have a burglar, once they break into a housing estate, she or he will often wander around the housing estate, take a look, which are the houses that have the bins out, which ones have the ground floor windows that are open, which ones have no lights on the front of the house, which one has the dog barking?

So you wander by. Simply all you’re doing is a recon. A quick walk by 20 houses in a housing estate. You pick out the two. Now you’ve got two targets. Then you come back later on in the night or you come back tomorrow evening and then you break into those two. Done. And again, you’re looking at the way different industries do it. It’s fascinating because if you look at one industry, i.e., physical security, and then you apply cybersecurity or you apply it to the board, oftentimes there’s a huge amount of similarity. And the same thing with cybersecurity is, if someone wants to compromise your environment, there’s two ways it will generally happen. One is, they’re generally doing a network scan and they look at your company and they find you have weak security. And then they turn their head back and they’re like, “Oh, interesting, a back door is open. I’m going to focus in on this company.”

Or else two, same thing as well, they’re doing a recon but they already know who you are. And in this case, they want to learn as much as possible so they can compromise you deep within your network. So, before you do any hacking of the environment, the recon component is the most critical part. Otherwise, you’re a bull in a china shop. You’re rushing in, you’re knocking off sensors, right, left and center. You shouldn’t be going in the front door, you should be going in the back door. So the recon component on that is critical, critical, critical.

Now, if you ask most CISOs when was the last time they reconned their own company, the vast majority will say, “I have no idea whatsoever.” So they may say, “Well, we use a security scanner.” But if you look at a security scanner, what you do is you go to the security scanner, you’ve put in a set of known IP addresses that you know about and you scan against those IP addresses. But if you look at that, that’s the tip of the iceberg, because what does the new industry model look like? It’s fluid. Gone are the days when cybersecurity would stand up a firewall and not allow traffic through it.

Now everything is extremely dynamic. Everything is internet facing. So now you’ve got Kubernetes, you’ve got people spinning up tens of thousands of containers with their own external IP addresses. They’re all accessible from the internet. You’ve got dev doing it, stage doing it. You’ve got all of the different environments coming. And now your attack surface every single minute of every single day changes. Some of it is, because it’s genuine. You’re allowing an IP address that’s out there because there’s a legitimate business reason, but oftentimes what will happen is, people will spin up the environment and suddenly it’s exposed to the internet.

Does the security team know about it? Likely not, and the CISO has no idea about it. So the ability whereby you get to know, you get to recon your environment, or ASM, attack surface management, is absolutely critical. Because if you don’t know it, you can’t protect it. And then the issue is, you could spin up an IP address in GCP or AWS or Alibaba. It could be on-prem, everybody’s now working from home. So my laptop could be exposed from the internet. And if you look at it, what always happens in virtually every single attack, well for the most part from the hosting, it starts on the outside and works its way in. So you really need to know your attack surface. You need to be scanning it every single day. You need to be able to attribute what are the IP addresses and devices that are exposed.

Simple example is, if you look at the last number of breaches that occurred, it’s simple stuff. Most times, it’s a cluster that was exposed from the internet, or somebody allowed a remote administration shell like SSH or RDP from the internet, or somebody got a Kubernetes cluster and exposed it from the internet. In each of these cases, it’s just humans making accidental mistakes. But oftentimes, those IP addresses could be exposed to the internet for minutes, for days, for years, and security never gets to know about it, or protect against it. But at the same time, the hacker knows because they’re doing their job, they’re doing the recon continuously. And that’s where I’m seeing that this issue that’s been around for years of, “How do I know what’s exposed to the internet?” now it’s being defined. It’s attack surface management. What’s my outside-in view?

So for the first time ever, cybersecurity teams are starting to…they knew there was a problem for ages, but they weren’t able to articulate what the problem was, never mind what the solution was. And now I’m seeing the kind of shift that, certainly in the last year or two, people were saying, “This is not a problem whereby I can look at it and say, yeah, it’s a problem.” Now, you’ve got to shift from this problem identification to, “Hey, we’ve got to go fix this.” Because that’s how the hackers are getting in. And now I’m seeing people saying, “Let’s start fixing this.” And I think going forward, you’re going to have attack surface management be one of the most critical components of any CISO and their organization. If not, then they will get owned. They will get compromised and it will have a devastating impact to their business.

Laurel: So speaking of that and how the board understands attack surface management, most IT employees are going to take the path of, like you said, ease and expediency. They’re spinning up Kubernetes and servers and cloud instances and whatever it may be, because they just need to get the job done. Why is that, when you have a global company, such a problem with, or I should say, an opportunity to solve when you go through other business necessities, like a merger and acquisition, where you may have two companies coming together and you think you know where all the servers are, but in fact, a company grows and changes every single day. And that may not be the last count, the last reliable count. Why is that a concern for CISOs and the board?

Niall: So I think about this as two ways. One is, know the attack surface of your own company. And then, two, for any of your acquisitions, before you acquire them, you need to know what their attack surface is as well. So if you ask 99% of CISOs, “Tell me about my attack surface,” they won’t have the data to do that. So to give you an example, in Palo Alto Networks, we use Xpanse. And the way that works is there’s four main phases I think about in attack surface management. And this applies whenever you’re acquiring a company, or to any company you’ve integrated in the last 10 years within your organization.

And the first part is continuous discovery. So you’ve got to have the ability—and that’s why we use Xpanse—to continuously scan 24 by 7 by 365, every single IP address in the internet to work out what IP addresses, what ports are open. So, first of all, you’ve got to know all of the IP addresses and the ports on the internet. The issue there, that’s fine, but it’s not really going to give you much. So what’s the difference between the IP address in Palo Alto Networks and the IP address of Acme, especially when it changes every single minute? Because everything is dynamic, everything changes continuously on the internet.

So the second part really for us is the attribution. So everything is scanned. We do attribution. So we start looking at every single IP address, every single service, every single user in the internet to look at, for those users themselves, are they Palo Alto Networks users or Palo Alto Networks devices or networks? Very critical because, with that, we’re able to see at any time, if somebody plugs in a laptop in London, we’re able to get attribution that that’s one of our devices and networks. And if that network and device opens up RDP, a remote shell from the internet, then that’s an issue. Or if somebody spins up a network that we have no idea what it is, and it’s got personally identifiable information (PII) or healthcare data, that would be devastating for us for our business. So we spend a lot of time using the tools, such as Xpanse, for the attribution component there.

Third component we look at, now you know the IP addresses and services and you know which ones are Palo Alto Networks. Next, after that, there’s varying risk levels. If somebody opens something from the internet that’s a web server and it’s communicating using encryption using SSL and it’s well-patched, then, for the most part, the risk in that case is probably one out of 10. But then, if you’ve got another IP address that was spun up, and it’s allowing an internal engineering tool that was accidentally exposed to the internet, that has access to your cloud environments, and it’s not patched, that’s a much higher risk. And oftentimes it’s not patched. Because when you look at tools that are exposed accidentally, they’re not managed, because if they were managed in the first place, they wouldn’t be exposed to the internet.

So for us, really, the model is what’s the risk level of every single IP address and every single service? And we can then focus in on the ones that are eight or nine out of 10. On a daily basis or on an hourly basis, we can go fix those. But oftentimes again, it’s a case of, if they’re exposed to the internet, they’re exposed, they’re not patched, they’re not managed. They’re accidentally exposed.

And then the final one we focus in on, the problem now is scale. You’re not talking about three IP addresses or four IP addresses. You could be talking about 40,000 IP addresses, 400,000 IP addresses. And then suddenly tomorrow, it’s 500,000. Then it goes down to 350,000 IP addresses. So, because of the scale of the issue, and because over time more and more things will be internet-facing, the only way to solve this is through automation. No doubt whatsoever: the model of an alert being generated, and somebody from the security operations center (SOC) jumping in, looking at that IP address, looking at the service, just doesn’t work.

So what needs to happen is, everything needs to be automated: everything from the scanning perspective to the attribution component to the risk of each IP address. Now, instead of 500,000 IP addresses, you’re focusing on the three that suddenly popped up: one could be an SSH server, one could be a telnet server, another could be an engineering tool. And then you want to build automation into the service whereby that service is automatically remediated, whether it’s patched or taken offline.
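The scan, attribute, score, and auto-remediate loop described above can be sketched in a few lines. This is a toy illustration, not Xpanse’s actual data model or scoring logic; the fields, weights, and threshold are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ExposedService:
    ip: str
    service: str         # e.g. "ssh", "telnet", "https"
    attributed: bool     # did attribution confirm this asset is ours?
    patched: bool
    internal_tool: bool  # an internal tool accidentally exposed?

def risk_score(svc: ExposedService) -> int:
    """Toy 1-10 score loosely following the discussion above."""
    score = 1
    if not svc.patched:
        score += 4       # unmanaged/unpatched is the common failure mode
    if svc.internal_tool:
        score += 3       # accidental exposure with cloud access
    if svc.service in {"telnet", "rdp", "ssh"}:
        score += 2       # remote-access protocols open to the internet
    return min(score, 10)

def triage(inventory: list[ExposedService], threshold: int = 8) -> list[ExposedService]:
    """Keep only attributed assets risky enough for automated remediation."""
    return [s for s in inventory if s.attributed and risk_score(s) >= threshold]
```

In practice the score would draw on far richer signals (CVEs, patch level, data sensitivity), but the shape is the same: score everything continuously, then hand only the highest-risk services to an automated remediation playbook.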

And if you look at that entire chain, it’s the reverse of what the hacker is doing. The hacker does the recon and then breaks into that server so as to compromise your environment. You’re starting in the same position they are, where you should be: you start with your attack surface, your recon. After that, you’re looking at the risk, the patching, taking things offline, the automation. So I firmly believe, with the drive toward the cloud and people working from home, this concept of a perimeter has been gone for 10 years. But cybersecurity has been hanging on to it and saying, “Well, there’s still a perimeter.” There isn’t.

So now, every single device that’s on the internet is its own perimeter: the device, the network, whatever else it is. And I think certainly one of the driving factors is, if everything is on the internet, always online, always communicating, and dynamically changing, you have to have a cybersecurity program that can tell you every single device that’s on the internet and its risk level, and then, for those that hit a certain risk level, either take them offline or apply controls. And by the way, you’ve got to do it 24/7/365, with no humans involved. You’ve got to do that because of the scale of the issue. If you have a person involved as part of that process, then you are going to fail. Hence us leveraging tools like Xpanse to find and then fix those issues.

Laurel: Yeah. Technology is scalable, but humans are not. Right?

Niall: Exactly.

Laurel: Well, Niall, I appreciate this conversation today. It’s been absolutely fascinating and it’s given us so much to think about. So thank you for joining us today on the Business Lab.

Niall: Thank you very much for the invitation. I really enjoyed the conversation.

Laurel: That was Niall Browne, the chief information security officer at Palo Alto Networks, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. And you can find us in print, on the web, and at dozens of events each year around the world.

For more information about us and the show, please check out our website at

The show is available wherever you get your podcasts.

If you enjoyed this episode, we hope you’ll take a moment to rate and review us.

Business Lab is a production of MIT Technology Review.

This episode was produced by Collective Next.

Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.


AI and data fuel innovation in clinical trials and beyond




Laurel: Speaking of the pandemic, it has really shown us how critical, and how fraught, the race to provide new treatments and vaccines to patients is. Could you explain what evidence generation is and how it fits into drug development?

Arnaub: Sure. As a concept, generating evidence in drug development is nothing new. It’s the art of putting together data and analyses that successfully demonstrate the safety, the efficacy, and the value of your product to a bunch of different stakeholders: regulators, payers, providers, and ultimately, most importantly, patients. And to date, I’d say evidence generation consists of not only the trial readout itself but also different types of studies that pharmaceutical or medical device companies conduct, such as literature reviews, observational data studies, or analyses that demonstrate the burden of illness or even treatment patterns. If you look at how most companies are designed, clinical development teams focus on designing a protocol and executing the trial, and they’re responsible for a successful readout; most of that work happens within clinical dev. But as a drug gets closer to launch, the health economics, outcomes research, and epidemiology teams are the ones helping paint what the value is and how we can understand the disease more effectively.

So I think we’re at a pretty interesting inflection point in the industry right now. Generating evidence is a multi-year activity, both during the trial and, in many cases, long after the trial. We saw this as especially true for vaccine trials, but also for oncology and other therapeutic areas. In covid, the vaccine companies put together their evidence packages in record time, and it was an incredible effort. And now the FDA is navigating a tricky balance: they want to promote the innovation we were talking about, the advancement of new therapies to patients, and they’ve built in vehicles to expedite therapies, such as accelerated approvals. But we need confirmatory trials or long-term follow-up to really understand the safety and the efficacy of these drugs. And that’s why the concept we’re talking about today is so important: how do we do this more expeditiously?

Laurel: It’s certainly important when you’re talking about life-saving innovations, but as you mentioned earlier, with the coming together of the rapid pace of technology innovation and the data being generated and reviewed, we’re at a special inflection point. So how have data and evidence generation evolved in the last couple of years, and would this ability to create a vaccine and all the evidence packages have been possible five or 10 years ago?

Arnaub: It’s important to set the distinction here between clinical trial data and what’s called real-world data. The randomized controlled trial is, and has remained, the gold standard for evidence generation and submission. And we know within clinical trials, we have a really tightly controlled set of parameters and a focus on a subset of patients. And there’s a lot of specificity and granularity in what’s being captured. There’s a regular interval of assessment, but we also know the trial environment is not necessarily representative of how patients end up performing in the real world. And that term, “real world,” is kind of a wild west of a bunch of different things. It’s claims data or billing records from insurance companies. It’s electronic medical records that emerge out of providers and hospital systems and labs, and even increasingly new forms of data that you might see from devices or even patient-reported data. And RWD, or real-world data, is a large and diverse set of different sources that can capture patient performance as patients go in and out of different healthcare systems and environments.

Ten years ago, when I was first working in this space, the term “real-world data” didn’t even exist; it was almost a swear word. It’s a term that was created in recent years by the pharmaceutical and regulatory sectors. The other important dimension is that the regulatory agencies, through very important pieces of legislation like the 21st Century Cures Act, have jump-started and propelled how real-world data can be used and incorporated to augment our understanding of treatments and of disease. So there’s a lot of momentum here. Real-world data is used in 85% to 90% of FDA-approved new drug applications. This is a world we have to navigate.

How do we keep the rigor of the clinical trial and tell the entire story, and then how do we bring in the real-world data to kind of complete that picture? It’s a problem we’ve been focusing on for the last two years, and we’ve even built a solution around this during covid called Medidata Link that actually ties together patient-level data in the clinical trial to all the non-trial data that exists in the world for the individual patient. And as you can imagine, the reason this made a lot of sense during covid, and we actually started this with a covid vaccine manufacturer, was so that we could study long-term outcomes, so that we could tie together that trial data to what we’re seeing post-trial. And does the vaccine make sense over the long term? Is it safe? Is it efficacious? And this is, I think, something that’s going to emerge and has been a big part of our evolution over the last couple years in terms of how we collect data.
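At its core, the linkage Arnaub describes, tying a patient’s trial record to their post-trial real-world records, is a left join on a shared patient identifier. Here is a minimal sketch with invented data; this is not Medidata Link’s actual schema or mechanism, just an illustration of the join.

```python
# Hypothetical trial-side records (one per enrolled patient)
trial = [
    {"patient_id": "P001", "arm": "vaccine", "trial_outcome": "no_event"},
    {"patient_id": "P002", "arm": "vaccine", "trial_outcome": "no_event"},
    {"patient_id": "P003", "arm": "placebo", "trial_outcome": "event"},
]

# Hypothetical post-trial real-world records (claims / EHR encounters)
real_world = [
    {"patient_id": "P001", "encounter": "annual_checkup", "months_post_trial": 6},
    {"patient_id": "P003", "encounter": "er_visit", "months_post_trial": 2},
    {"patient_id": "P003", "encounter": "followup", "months_post_trial": 4},
]

def link(trial_rows, rw_rows):
    """Left join: keep every trial patient, attach any post-trial encounters."""
    by_patient = {}
    for r in rw_rows:
        by_patient.setdefault(r["patient_id"], []).append(r)
    return {
        t["patient_id"]: {**t, "encounters": by_patient.get(t["patient_id"], [])}
        for t in trial_rows
    }

linked = link(trial, real_world)
```

The left join matters: every trial patient stays in the linked set even with no real-world record (like P002 here), so long-term follow-up can distinguish “no event observed” from “patient lost to the data.”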

Laurel: That data collection story certainly points to some of the challenges in generating high-quality evidence. What are some other gaps in the industry that you have seen?

Arnaub: I think the elephant in the room for development in the pharmaceutical industry is that despite all the data and all the advances in analytics, the probability of technical success, or regulatory success as it’s called, for drugs moving forward is still really low. The overall likelihood of approval from phase one consistently sits under 10% for a number of different therapeutic areas. It’s sub-5% in cardiovascular, and a little over 5% in oncology and neurology. What underlies these failures is a lack of data to demonstrate efficacy: a lot of companies submit what the regulatory bodies call a flawed study design or an inappropriate statistical endpoint, or, in many cases, trials are underpowered, meaning the sample size was too small to reject the null hypothesis. So you’re grappling with a number of key decisions if you look at just the trial itself and some of the gaps where data should be more involved and more influential in decision making.
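The “underpowered” point can be made concrete: required sample size grows with the inverse square of the effect size you hope to detect. Below is the textbook normal-approximation formula for a two-sided, two-sample comparison of means; this is a generic statistics sketch, not any particular company’s method.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per arm for a two-sided,
    two-sample comparison of means (effect size = Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for 80% power
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)
```

Halving the detectable effect size roughly quadruples the required enrollment per arm, which is why a trial powered around an optimistic effect estimate can end up unable to reject the null hypothesis.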

So, when you’re designing a trial, you’re evaluating: “What are my primary and secondary endpoints? What inclusion or exclusion criteria do I select? What’s my comparator? What’s my use of a biomarker? And then how do I understand outcomes? How do I understand the mechanism of action?” It’s a myriad of different choices and a permutation of decisions that have to be made in parallel, with all of this data and information coming from the real world. We talked about the momentum and how valuable an electronic health record could be. But the gap, the problem, is: how is the data collected? How do you verify where it came from? Can it be trusted?

So, while volume is good, the gaps contribute to a significant chance of bias in a variety of areas. There’s selection bias, meaning differences in the types of patients you select for treatment; there’s performance bias, detection bias, and a number of issues with the data itself. What we’re trying to navigate is how you can put these data sets together in a robust way while addressing some of those key drivers of drug failure I was referencing earlier. Our approach has been to use a curated historical clinical trial data set that sits on our platform to contextualize what we’re seeing in the real world and to better understand how patients are responding to therapy. In theory, and in what we’ve seen with our work, that helps clinical development teams use data in a novel way to design a trial protocol, or to improve some of the statistical analysis they do.



Power beaming comes of age




The global need for power to provide ubiquitous connectivity through 5G, 6G, and smart infrastructure is rising. This report explains the prospects of power beaming; its economic, human, and environmental implications; and the challenges of making the technology reliable, effective, wide-ranging, and secure.

The following are the report’s key findings:

Lasers and microwaves offer distinct approaches to power beaming, each with benefits and drawbacks. While microwave-based power beaming has a more established track record thanks to lower cost of equipment, laser-based approaches are showing promise, backed by an increasing flurry of successful trials and pilots. Laser-based beaming has high-impact prospects for powering equipment in remote sites, the low-earth-orbit economy, electric transportation, and underwater applications. Lasers’ chief advantage is the narrow concentration of beams, which enables smaller transmission and receiver installations. On the other hand, their disadvantage is the disturbance caused by atmospheric conditions and human interruption, although there are ongoing efforts to tackle these deficits.

Power beaming could quicken energy decarbonization, boost internet connectivity, and enable post-disaster response. Climate change is spurring investment in power beaming, which can support more radical approaches to energy transition. Due to solar energy’s continuous availability, beaming it directly from space to Earth offers superior conversion compared to land-based solar panels when averaged over time. Electric transportation—from trains to planes or drones—benefits from power beaming by avoiding the disruption and costs caused by cabling, wiring, or recharge landings.

Beaming could also transfer power from remote renewables sites such as offshore wind farms. Other areas where power beaming could revolutionize energy solutions include refueling space missions and satellites, 5G provision, and post-disaster humanitarian response in remote regions or areas where networks have collapsed due to extreme weather events, whose frequency will be increased by climate change. In the short term, as efficiencies continue to improve, power beaming has the capacity to reduce the number of wasted batteries, especially in low-power, across-the-room applications.

Public engagement and education are crucial to support the uptake of power beaming. Lasers and microwaves may conjure images of death rays and unanticipated health risks. Public backlash against 5G shows the importance of education and information about the safety of new, “invisible” technologies. Based on decades of research, power beaming via both microwaves and lasers has been shown to be safe. The public is comfortable living amidst invisible forces like wi-fi and wireless data transfer; power beaming is simply the newest chapter.

Commercial investment in power beaming remains muted due to a combination of historical skepticism and uncertain time horizons. While private investment in futuristic sectors like nuclear fusion energy and satellites booms, the power-beaming sector has received relatively little investment and venture capital relative to the scale of the opportunity. Experts believe this is partly a “first-mover” problem as capital allocators await signs of momentum. It may be a hangover of past decisions to abandon beaming due to high costs and impracticality, even though such reticence was based on earlier technologies that have now been surpassed. Power beaming also tends to fall between two R&D comfort zones for large corporations: it does not deliver short-term financial gain, but it is also not long term enough to justify a steady financing stream.

Download the full report.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.



The porcelain challenge didn’t need to be real to get views




“I’ve dabbled in the past with trying to make fake news that is transparent about being fake but spreads nonetheless,” Durfee said. (He once, with a surprising amount of success, got a false rumor started that longtime YouTuber Hank Green had been arrested as a teenager for trying to steal a lemur from a zoo.)

On Sunday, Durfee and his friends watched as #PorcelainChallenge gained traction, and they celebrated when it generated its first media headline (“TikTok’s porcelain challenge is not real but it’s not something to joke about either”). A steady parade of other headlines, some more credulous than others, followed. 

But reflex-dependent viral content has a short life span. When Durfee and I chatted three days after he posted his first video about the porcelain challenge, he already could tell that it wasn’t going to catch as widely as he’d hoped. RIP. 

Nevertheless, viral moments can be reanimated with just the slightest touch of attention, becoming an undead trend ambling through Facebook news feeds and panicked parent groups. Stripping away their original context can only make them more powerful. And dubious claims about viral teen challenges are often these sorts of zombies—sometimes giving them a second life that’s much bigger (and arguably more dangerous) than the first.

For every “cinnamon challenge” (a real early-2010s viral challenge that made the YouTube rounds and put participants at risk for some nasty health complications), there are even more dumb ideas on the internet that do not trend until someone with a large audience of parents freaks out about them. 

Just a couple of weeks ago, for instance, the US Food and Drug Administration issued a warning about boiling chicken in NyQuil, prompting a panic over a craze that would endanger Gen Z lives in the name of views. Instead, as BuzzFeed News reported, the warning itself was the most viral thing about NyQuil chicken, spiking interest in a “trend” that was not trending.

And in 2018, there was the “condom challenge,” which gained widespread media coverage as the latest life-threatening thing teens were doing online for attention—“uncovered” because a local news station sat in on a presentation at a Texas school on the dangers teens face. In reality, the condom challenge had a few minor blips of interest online in 2007 and 2013, but videos of people actually trying to snort a condom up their nose were sparse. In each case, the fear of teens flocking en masse to take part in a dangerous challenge did more to amplify it to a much larger audience than the challenge was able to do on its own. 

The porcelain challenge has all the elements of future zombie content. Its catchy name stands out like a bite on the arm. The posts and videos seeded across social media by Durfee’s followers—and the secondary audience coming across the work of those Durfee deputized—are plausible and context-free. 


Copyright © 2021 Seminole Press.