Incident Response Management

Discover featured content on LinkedIn created by experts.

  • View profile of William "Craig" F.

    Craig Fugate Consulting

    12,695 followers

    Craig Fugate's Deadly Seven Sins of Emergency Management

    Former FEMA Administrator Craig Fugate, drawing from decades of disaster experience, identified critical flaws in how emergency management is often practiced. These "Deadly Seven Sins" serve as warnings against complacency, bureaucracy, and shortsighted planning.

    1. We plan for what we are capable of responding to. Instead of preparing for catastrophic events, we often design plans around what systems can currently deliver. This guarantees failure when the event exceeds those limits.
    2. We plan for our communities by placing the "too hard to do" in an annex. People with access and functional needs, children, the elderly, and pets are frequently sidelined into planning annexes rather than being part of core planning. This marginalizes those who are often the most vulnerable.
    3. We exercise to success. Too many drills are scripted to "go right." Real preparedness means stress-testing systems, embracing uncertainty, and discovering failure points.
    4. We think our emergency response system can scale up from emergency to disaster. Emergency response systems don't automatically scale to meet catastrophic needs. Disasters break the system; they don't just stress it.
    5. We build our emergency management team around government, leaving out volunteer organizations, the private sector, and the public. A government-centric approach ignores the real capabilities of the Whole Community. Effective emergency management integrates all sectors.
    6. We treat the public as a liability. Communities are seen as problems to manage, not partners in response. This mindset underestimates the resilience, resourcefulness, and critical role of the public.
    7. We price risk too low to change behavior, and as a result, we continue to grow risk. Risk is underestimated in markets, policies, and development decisions. Without true pricing of risk, society continues to build vulnerability into the system.

    Takeaway: Avoiding these seven sins requires bold thinking, uncomfortable conversations, and a commitment to inclusive, realistic, and scalable preparedness. As Fugate often says: "Hope is not a plan."

  • View profile of Andrew King

    CISO | Chief Information Security Officer | Incident Commander | Cyber Security SME | Global IT Executive | Executes strategies to strengthen security, build high-performing teams, and mitigate risk

    6,160 followers

    After spending the past year leading ransomware incident response, I wanted to share some insights that you should be thinking about in relation to your organization.

    1. Leadership clarity is non-negotiable. Multiple executives giving competing directions doesn't just create confusion; it directly impacts your bottom line. Every minute of misaligned leadership translates into increased recovery costs and extended downtime.
    2. Trust your IR experts. Yes, you know your environment inside and out. But incident response is their expertise. When you hire specialists, let them specialize. I've seen firsthand how second-guessing IR teams can derail recovery efforts.
    3. Master the time paradox. Your success hinges on rapid containment while simultaneously extending threat actor negotiations. If your leadership and IR partnership aren't solid (points 1 & 2), this delicate balance falls apart.
    4. Global password resets are deceptively complex. Every human account, service account, API key, and automated process needs rotation. Without robust asset management and IAM programs, this becomes a nightmare. You will discover dependencies that you didn't even know existed.
    5. Visibility isn't just nice-to-have; it's survival. Modern security tools that provide comprehensive visibility across your environment aren't a luxury. This week reinforced that every blind spot extends your recovery time exponentially.
    6. Data gaps become permanent mysteries. Without proper logging and monitoring, you might never uncover the initial access vector. It's sobering to realize that a lack of visibility today means questions that can never be answered tomorrow.
    7. Backup investment is incident insurance. Organizations regularly lose millions that could have been prevented with proper backup strategies. If you think good backups are expensive, wait until you see the cost of not having them.
    8. Protect your team from burnout. Bring in additional help immediately; don't wait. Your core team needs to be there for the rebuild after the incident, and running them into the ground during response isn't worth it. Spending money on staff augmentation isn't just about handling the immediate crisis; it's about maintaining the institutional knowledge and expertise you'll need for recovery.

    Remember: the incident ends, but your team's journey continues long after. #Cybersecurity #IncidentResponse #CISO #RansomwareResponse #SecurityLeadership
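    The point about global password resets only being tractable with solid asset management can be sketched concretely: before you can rotate every credential, you need an inventory with rotation timestamps. A minimal illustration (the records, field names, and 90-day cutoff are all made up for the example):

    ```python
    from datetime import datetime, timedelta, timezone

    # Hypothetical reference time for the exercise.
    NOW = datetime(2024, 6, 1, tzinfo=timezone.utc)

    # Toy credential inventory: human accounts, service accounts, API keys.
    inventory = [
        {"name": "svc-backup",  "kind": "service account", "last_rotated": NOW - timedelta(days=400)},
        {"name": "api-billing", "kind": "API key",         "last_rotated": NOW - timedelta(days=10)},
        {"name": "j.doe",       "kind": "human account",   "last_rotated": NOW - timedelta(days=95)},
    ]

    def rotation_queue(creds, max_age_days=90):
        """Credentials overdue for rotation, oldest first."""
        overdue = [c for c in creds if (NOW - c["last_rotated"]).days > max_age_days]
        return sorted(overdue, key=lambda c: c["last_rotated"])

    for cred in rotation_queue(inventory):
        print(cred["kind"], cred["name"])
    ```

    In a real global reset the same inventory would also have to capture the dependencies between credentials and the automated processes that use them, which is exactly where the "dependencies you didn't know existed" surface.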

  • View profile of Alexey Navolokin

    FOLLOW ME for breaking tech news & content • helping usher in tech 2.0 • at AMD for a reason w/ purpose • LinkedIn persona

    778,383 followers

    How AI is changing storm response in the U.S., technically. Have you experienced it?

    Extreme weather response is no longer driven by single forecasts. It's driven by ensembles + AI acceleration + real-time data fusion. Here's what's happening under the hood:

    AI-accelerated Numerical Weather Prediction (NWP)
    Deep learning models (graph neural nets, transformers) are trained on decades of reanalysis data to approximate full physics-based solvers. Result:
    • Inference in seconds instead of hours
    • Rapid ensemble generation (hundreds of scenarios, not dozens)
    This allows forecasters to update storm tracks and intensity continuously, not on fixed cycles.

    Multi-modal data fusion
    AI ingests:
    • Satellite imagery (GOES)
    • Doppler radar volumes
    • Ocean buoys & atmospheric soundings
    • Ground IoT sensors
    • Historical climatology
    Models correlate spatial-temporal patterns across modalities, something classical models struggle with at scale.

    Severe weather nowcasting
    Computer vision models detect:
    • Convective initiation
    • Tornadic signatures
    • Rapid intensification signals
    Lead times improve by 30–60 minutes for fast-forming events, which is operationally massive for emergency management.

    Probabilistic forecasting, not single answers
    ML-driven ensembles output probability distributions, not deterministic paths:
    • Flood depth likelihoods
    • Wind gust exceedance
    • Ice accumulation risk
    This feeds directly into risk-based decision systems.

    Infrastructure impact modeling
    Utilities combine AI weather outputs with:
    • Grid topology
    • Asset age & failure history
    • Load forecasts
    This enables pre-storm optimization:
    • Crew pre-positioning
    • Targeted grid isolation
    • Faster restoration paths

    Operational decision intelligence
    AI systems now bridge forecast → action:
    • When to evacuate
    • Where to stage responders
    • Which assets fail first
    This is no longer meteorology alone; it's real-time systems engineering.

    Storms are getting more chaotic. Our response is getting more computational. AI doesn't replace physics. It compresses it into time we can actually use. #AI #WeatherModeling #Nowcasting #ClimateTech #InfrastructureAI #DigitalTwins #ResilienceEngineering #HPC
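    The "wind gust exceedance" idea above reduces to simple arithmetic once you have an ensemble: the probability of exceeding a threshold is the fraction of ensemble members above it. A minimal numerical sketch (the ensemble here is synthetic, drawn from a made-up distribution, not real model output):

    ```python
    import numpy as np

    # Hypothetical ensemble of peak wind-gust forecasts (m/s) for one location,
    # standing in for the hundreds of AI-generated scenarios described above.
    rng = np.random.default_rng(42)
    ensemble = rng.normal(loc=28.0, scale=6.0, size=200)  # synthetic members

    def exceedance_probability(members: np.ndarray, threshold: float) -> float:
        """Fraction of ensemble members exceeding a threshold."""
        return float(np.mean(members > threshold))

    # Probability that gusts exceed 33 m/s (roughly hurricane force);
    # this kind of number feeds risk-based decision systems directly.
    p_exceed = exceedance_probability(ensemble, 33.0)
    print(f"P(gust > 33 m/s) = {p_exceed:.2f}")
    ```

    The same one-liner generalizes to flood-depth likelihoods or ice-accumulation risk: swap the variable and threshold, keep the ensemble counting.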

  • View profile of Rachel Eigen, CSP, MISE, PhD Candidate

    Safety Leader & Visionary bridging the gap between compliance and culture

    1,369 followers

    STOP Calling "Training" the Root Cause. It's Not. (And it's costing companies real money + real lives.)

    One of the most common phrases I see in incident investigation reports? ➡️ "Root Cause: Lack of training."

    Let me be blunt: training is almost never the true root cause. It's an easy answer. A convenient answer. But it's not the right answer. If someone sits through the training… if someone can recite back the steps… if someone signed the sheet… but the incident still happened, the problem isn't training.

    Real root causes look like this:
    • A system that doesn't reinforce critical behaviors.
    • Production pressure that rewards speed over safety.
    • Supervisors who were never trained to coach risk-based decision-making.
    • Broken communication loops between ops, maintenance, and safety.
    • Policies written for audits, not for real people.
    • Engineering controls that were never implemented because they "cost too much."
    • A cultural norm of workaround > work-as-designed.

    These are the roots. Training is just a branch. Here's the truth: when "training" becomes the default root cause, it lets the system off the hook. And if you're blaming workers when the system is the real problem, you're guaranteeing the incident will happen again.

    What high-performing organizations do instead:
    • Use human-factors thinking, not blame-based thinking
    • Ask why the environment allowed the error, not why the person made it
    • Evaluate workload, equipment design, conflicting priorities, and organizational signals
    • Treat workers as the source of insight, not the source of failure
    • Document root causes that leadership can actually act on, not ones that just check a box

    My challenge to every safety + operations leader: next time an incident happens, don't ask ❌ "Who messed up?" or ❌ "Do they need more training?" Ask this instead: "What conditions set this event up to occur, and how do we eliminate them permanently?" That's root cause. That's prevention. That's leadership.

  • View profile of Alexandru Voica

    Public affairs | AI, social media, interactive entertainment, online retail, semiconductors, consumer electronics.

    7,384 followers

    When I was part of the Meta comms team, I was on the taskforce that handled communications during outages such as the one caused by CrowdStrike today (we called them SEVs, short for Site EVents). I personally led the crisis response on two big Meta outages, where multiple apps and services were down for several hours. Here is a summary of the comms playbook I helped build to manage outages.

    As soon as we became aware of the outage, I reached out to the incident manager on call. This was usually a senior engineering manager or director in our Production Engineering team, but in most companies you'd have a head of SRE or DevOps. Together, we'd figure out the answers to the following questions:
    1. Is the outage local, regional or global?
    2. When did the outage start and when do we expect to recover?
    3. What customers (internal or external) are affected?
    4. Is the entire platform down or is it just several features that are affected?
    5. Have we had any complaints from major customers?
    6. Is the outage the result of a DDoS or cybersecurity incident?
    7. Have we lost any customer data?

    Using the information from the answers above, I drafted a holding statement acknowledging the issue and posted it internally and externally. I also started a tracker to monitor press coverage and inquiries from reporters, to ensure I could get back to everyone in a timely fashion and keep my finger on the pulse of the coverage (volume, sentiment, etc.).

    Then, as the engineering teams started investigating and fixing the issue, I checked in with them every hour to get updates. If we had any major updates, I made changes to the holding statement and reshared it with the relevant stakeholders. At Meta, this was my colleagues in the communications team and reporters who were covering the story, but also company-specific stakeholders such as partnerships and marketing, sales, investor relations, legal and policy. That's because in the case of a major outage, public companies might have contractual responsibilities with customers or regulatory obligations to report incidents.

    We'd repeat the process until the issue was fixed, at which point we issued a final statement reiterating what had happened (configuration changes were the main culprit) and which services were affected. We also apologized for the inconvenience caused and confirmed the issue had been fixed. For outages that lasted more than four to six hours, we also maintained an open communications channel with the senior leadership of the company to ensure they had visibility into our strategy and execution. Hope this helps other PR teams out there!
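    The seven triage questions above amount to a structured intake record that comms can fill in before drafting the holding statement. A minimal sketch of that idea (the field names and statement wording are illustrative, not Meta's actual templates):

    ```python
    from dataclasses import dataclass

    # One record per SEV, mirroring the seven triage questions.
    @dataclass
    class OutageIntake:
        scope: str                 # Q1: "local", "regional" or "global"
        started_at: str            # Q2: when the outage began
        expected_recovery: str     # Q2: best current estimate, or "unknown"
        affected: str              # Q3: which customers are impacted
        full_outage: bool          # Q4: entire platform vs. a subset of features
        major_complaints: bool     # Q5: complaints from major customers so far
        security_incident: bool    # Q6: DDoS / cybersecurity root cause suspected
        data_loss: bool            # Q7: any customer data lost

        def holding_statement(self) -> str:
            """First-pass external statement; updated hourly as facts change."""
            what = "our platform is" if self.full_outage else "some features are"
            return (
                f"We are aware that {what} currently unavailable for "
                f"{self.affected}. The issue began at {self.started_at} and "
                f"our teams are working to restore service. We will share "
                f"updates as we learn more."
            )

    intake = OutageIntake(
        scope="global", started_at="2024-07-19 05:30 UTC",
        expected_recovery="unknown", affected="some users worldwide",
        full_outage=False, major_complaints=True,
        security_incident=False, data_loss=False,
    )
    print(intake.holding_statement())
    ```

    Keeping the intake as one record also makes the hourly update loop mechanical: engineering changes a field, comms regenerates and reshares the statement.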

  • View profile of Lora Vaughn

    2x CISO | Prevented Ransomware Deployment | Fractional CISO & Security Advisor | Post-Incident Advisory | Community Banks, Fintech & SaaS | FFIEC, GLBA, SOC 2, PCI | Speaker | vaughncybergroup.com

    11,172 followers

    Your incident response plan will fail. Not because it's bad. Because it assumes everything will go right: the right people will be available, the right information will be accessible, the attack will follow a neat, linear path you can respond to step by step. None of that happens in a real incident.

    I've been in rooms where the IR plan was beautiful on paper and useless the moment things went sideways. Key people were unreachable. Systems were down. Nobody knew who was making decisions. The plan said "contact the CISO" and the CISO was on a plane.

    The problem isn't the plan. It's the assumption that having one means you're ready. You don't build response capability by writing a document. You build it by drilling until decisions become muscle memory: surprise tabletops, decision frameworks that work when half the team is missing, communication trees that don't depend on a single person.

    The plan is the starting point, not the finish line. #cybersecurity #incidentresponse #infosec #securityleadership

  • View profile of Devin Marble

    Growth | Enterprise XR | Partnerships | TEDx Speaker | Podcaster

    5,058 followers

    This changes everything for EMT, Paramedic, tactical, and military training. VRpatients' spatial passthrough feature is pushing the boundaries of what's possible in immersive simulation. Here's why it matters:
    ➤ Train anywhere, treat anyone. Place a patient avatar on a cot, in the field, an alley, a helicopter, or a battlefield and practice in the actual environments responders work in.
    ➤ Integrate physical skills with clinical decisions. Apply real tourniquets, perform needle decompression, draw and deliver meds, all while making time-sensitive, high-stakes decisions inside the headset.
    ➤ Close the realism gap. Passthrough eliminates the disconnect between virtual scenarios and hands-on skills. What you do in sim matches what you do in the field.

    Watch this video of immersive training in the field simulating a hit-and-run in a neighborhood: https://lnkd.in/gQ8N5aiJ

    We're not just simulating emergencies. We're preparing for them. If you're training tactical or field responders, let's talk. VRpatients #EMTTraining #MilitarySimulation #TacticalMedicine #ImmersiveLearning #VRinHealthcare #PublicHealthInnovation #SWATTraining #SimulationTraining #ClinicalEducation #VRpatients #DevinMarble

  • View profile of Dr. Rashid Khan DBA

    Dr Safety n Emergency Management | UNDRR Member | TEDx Organiser n Speaker | Bestselling Author | Global Disaster Risk & Emergency Management Expert | Founder & CEO of Evacovation | Security Advisor | ISO 27001 Master

    25,412 followers

    The "mitigation gap" that keeps agencies reactive.

    The Deloitte-NEMA National Risk Study 2025 provides a candid look at the challenges facing state and territorial emergency management agencies across the US. The report's findings reveal an interconnected set of issues that are stretching resources and hindering the shift from response to prevention.

    The top challenge identified by the study is funding, with 64% of respondents citing it as their most significant barrier. This financial strain is directly linked to the crisis in workforce development: agencies struggle to retain and recruit skilled staff in a competitive market, driven by budget constraints and a shortage of qualified candidates (81% of respondents).

    Crucially, the study highlights a profound disconnect in focus: emergency managers want to spend more time on mitigation and preparedness, but are stuck in a constant cycle of response. On average, respondents would prefer to spend 44% of their time on mitigation, but currently spend only 5%. This 39-point gap underscores the inability of agencies to proactively reduce risk.

    Furthermore, while agencies are eager to adopt advanced technology like AI and advanced risk modeling, 85% cite infrastructure limitations and procurement challenges as major hurdles. The challenge is clear: the expanding mandate of emergency management has outpaced available resources and technology. The key to future resilience lies in closing the gap between the desired focus on mitigation and the current reality of response, a task that demands strategic investment in workforce and modern technology.

    Share your opinion on this! #NationalRiskStudy #EmergencyManagement #Deloitte #NEMA #MitigationGap

  • View profile of Shiv Kataria

    Mentor | Leader | Risk Governance | Incident Response | Cybersecurity, Operational Technology [views are personal]

    23,474 followers

    CISA just dropped something useful for responders: an open-source Eviction Strategies Tool (built with MITRE) to help teams contain and evict adversaries, fast and in the right order.

    Why do you care? During incidents, most delays come from sequencing: what to do first, what to isolate next, and how to avoid tipping off the adversary. This tool turns findings into a clear, defensible plan.

    What's inside:
    • COUN7ER: a library of atomic post-compromise countermeasures mapped to ATT&CK TTPs.
    • Playbook NextGen: match your incident notes (ATT&CK or free text) to recommended actions and auto-build an eviction plan.
    • Exports: JSON, Word, Excel, Markdown for quick sharing with IR, legal, and leadership.
    • Grounded in standards: built on ATT&CK and informed by D3FEND.
    • Depth: 100+ curated, researched actions.
    • Open source: MIT license.

    How can you use this:
    1. Feed in current IR findings (or map to ATT&CK).
    2. Generate the eviction plan and sequence of actions.
    3. Export to Word/Markdown for the war room, assign owners, and track.
    4. Rehearse in a tabletop; tune for your environment (IT/OT, cloud/on-prem).

    Add the playbook to your IR runbook and repeat after each hunt. A no-cost way to bring discipline and speed to remediation. Worth adding to your next tabletop and real-world playbooks.

    Link: https://lnkd.in/gMvrPnwU
    Cybersecurity and Infrastructure Security Agency

    Liked it? Repost. #CISA #IncidentResponse #BlueTeam #MITREATTACK #D3FEND #OpenSource #Cybersecurity
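    The core matching idea, turning incident notes into a list of candidate countermeasures keyed by ATT&CK technique IDs, can be sketched in a few lines. This is a toy illustration only: the countermeasure mapping below is invented for the example, and the real tool ships a researched COUN7ER library with its own schema.

    ```python
    import re

    # Invented technique -> countermeasure map, standing in for COUN7ER entries.
    COUNTERMEASURES = {
        "T1078": "Rotate credentials for the compromised accounts",
        "T1053": "Remove malicious scheduled tasks",
        "T1021": "Restrict lateral-movement protocols at the host firewall",
    }

    # ATT&CK technique IDs look like T1078 or T1053.005 (sub-technique).
    ATTACK_ID = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

    def plan_from_notes(notes: str) -> list[tuple[str, str]]:
        """Return (technique, countermeasure) pairs for IDs found in free text."""
        plan = []
        for tid in ATTACK_ID.findall(notes):
            base = tid.split(".")[0]  # fall back to the parent technique
            if base in COUNTERMEASURES:
                plan.append((tid, COUNTERMEASURES[base]))
        return plan

    notes = "Adversary used valid accounts (T1078) and persisted via T1053.005."
    for tid, action in plan_from_notes(notes):
        print(tid, "->", action)
    ```

    The actual tool goes further, sequencing these actions into a defensible eviction order and exporting the plan; the sketch only shows the notes-to-actions lookup step.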

  • View profile of Mandy Andress

    Mandy Andress is an Influencer

    CISO | Investor | Board Member | Advancing the Future of Innovation in Cybersecurity

    10,343 followers

    57% of major cyber incidents involve attack types teams never rehearsed.

    Too many tabletop exercises rely on familiar, dramatic attack scenarios... the kind people already expect. But the real danger is in what nobody saw coming: subtle lateral movement, quiet exfiltration, or chained compromises that don't start with a big flash.

    To make exercises meaningful, they have to reflect your environment: your risks, your tech, your people. Teams should test contacting people, fallback comms, expired phone lists, even burner phone logistics. Those "mundane" failures often become the real showstoppers in a crisis.

    Real preparation is less about scripting a perfect drill and more about building adaptability, muscle memory for surprises, and resilience when chaos hits. #IncidentResponse #CyberReadiness #TabletopExercises
