Cisco Certified CyberOps Associate CBROPS 200-201 Exam Guide

Understanding Cisco Cybersecurity Operations Fundamentals v1.1 (200-201)

1.0 Security Concepts (20%)

This domain covers the foundational principles crucial to all cybersecurity efforts, starting with the CIA triad, which outlines the essential goals of security: Confidentiality, Integrity, and Availability. It progresses to explore various security deployments, detailing the differences and uses of network, endpoint, and application security systems. Special emphasis is placed on the evolution of security tools from traditional antivirus to sophisticated SIEM and SOAR solutions. The curriculum also includes a thorough glossary of security terms, from threat intelligence to zero trust frameworks, equipping candidates with a solid understanding of the terminology used in daily cybersecurity operations. It examines different security models and access control mechanisms, explaining how each model supports the overarching security infrastructure. Additionally, this section discusses CVSS terminology to aid in vulnerability assessment and mitigation planning. It also addresses challenges in maintaining visibility across different data environments, which is crucial for effective security detection and response.

  1. Describe the CIA triad: The CIA triad is a widely accepted security model that is fundamental to protecting data and systems. It stands for Confidentiality, Integrity, and Availability, each serving as a critical objective for comprehensive security strategies.

  2. Compare security deployments:

    • Network, endpoint, and application security systems: These systems protect different aspects of an IT infrastructure. Network security guards the data in transit, endpoint security focuses on devices interacting with the network, and application security protects software from threats.
    • Agentless and agent-based protections: Agent-based solutions involve software installed on the target asset to monitor and secure it, while agentless solutions manage and secure assets without installing software on them.
    • Legacy antivirus and antimalware: Traditional antivirus solutions are signature-based and often struggle with new or unknown threats, whereas antimalware addresses a broader range of threats, including trojans, spyware, and ransomware, and typically adds heuristic and behavioral detection.
    • SIEM, SOAR, and log management: SIEM collects and aggregates logged data, SOAR integrates and automates response to threats, and log management involves the collection and analysis of computer-generated records for security.
    • Container and virtual environments: Security strategies for containerized and virtual environments must address their dynamic nature and potential vulnerabilities.
    • Cloud security deployments: Focuses on protecting data and applications in cloud environments, dealing with unique challenges such as multi-tenancy and loss of control over hardware.

  3. Describe security terms:

    • Threat intelligence (TI): Refers to data that is collected, processed, and analyzed to understand threat actors’ motives, targets, and attack behaviors.
    • Threat hunting: An active cybersecurity practice aimed at discovering malicious activities that have evaded detection by existing tools.
    • Malware analysis: Involves examining the program code, structures, and functionality of malicious software.
    • Threat actor: Individuals or groups responsible for an attack attempting to gain unauthorized access to systems.
    • Run book automation (RBA): The process of automating operations tasks based on a series of standard procedural patterns.
    • Reverse engineering: The practice of dismantling and examining existing technologies to understand their operations or to recreate something similar.
    • Sliding window anomaly detection: A method used in statistics and signal processing to detect anomalous behavior within dynamically changing data.
    • Principle of least privilege: Ensuring that any process, user, or program has only the minimum permissions necessary to perform its function.
    • Zero trust: A security concept centered on the belief that organizations should not automatically trust anything inside or outside their perimeter, and must instead verify every access request.
    • Threat intelligence platform (TIP): Tools that allow organizations to aggregate, compare, and analyze threat data in a centralized platform.
    • Threat modeling: The process of identifying, understanding, and communicating threats and mitigations within the context of protecting something of value.
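
Sliding window anomaly detection, mentioned above, can be illustrated with a minimal Python sketch. This is a simplified z-score approach over hypothetical event-rate data, not a production detector:

```python
from collections import deque
from statistics import mean, stdev

def sliding_window_anomalies(values, window=5, threshold=3.0):
    """Flag values that deviate strongly from the recent window's baseline."""
    recent = deque(maxlen=window)
    anomalies = []
    for i, v in enumerate(values):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            # Flag values more than `threshold` standard deviations from the mean
            if sigma > 0 and abs(v - mu) / sigma > threshold:
                anomalies.append((i, v))
        recent.append(v)
    return anomalies

# Hypothetical data: a steady login rate with one sudden spike
rates = [10, 12, 11, 9, 10, 11, 95, 10, 12]
print(sliding_window_anomalies(rates))  # [(6, 95)]
```

Because the baseline is recomputed from the most recent window, the detector adapts as "normal" behavior drifts over time, which is the point of the sliding window.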

  4. Compare security concepts:

    • Risk: Involves identifying potential threats and vulnerabilities, and assessing the impacts they may have if they were to occur.
    • Threat: Any circumstance or event that has the potential to adversely impact organizational operations through unauthorized access, destruction, or disclosure.
    • Vulnerability: Weaknesses in a system that can be exploited by threats to gain unauthorized access to an asset.
    • Exploit: A piece of software, a command, or a methodology that takes advantage of a vulnerability leading to unauthorized control or access.

  5. Describe the principles of the defense-in-depth strategy: This security approach uses multiple layers of security to protect the valuable data and information of an organization. If one layer fails, others still stand.

  6. Compare access control models:

    • Discretionary Access Control (DAC): Allows owners to set access controls for other users.
    • Mandatory Access Control (MAC): Controls are set by a central authority based on multiple levels of security.
    • Nondiscretionary Access Control: Access to resources is managed centrally by a system or security administrator rather than by individual resource owners.
    • Authentication, Authorization, Accounting (AAA): A framework to control access, enforce policies, audit usage, and provide the information necessary to bill for services.
    • Rule-based Access Control: Access decisions are based on a set of rules defined by the system administrator.
    • Time-based Access Control: Access to resources is granted or denied based on the current time.
    • Role-based Access Control (RBAC): Access to resources is based on the roles of individual users within an organization.
    • Attribute-based Access Control (ABAC): Decisions to deny or allow access are based on attributes of the user, the resource to be accessed, and current environmental conditions.
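
The contrast between RBAC and ABAC can be shown in a few lines of Python. The policy data below is purely hypothetical, meant only to show where the access decision comes from in each model:

```python
# RBAC: permissions attach to roles, and users hold roles (hypothetical policy)
ROLE_PERMISSIONS = {
    "analyst": {"read_logs"},
    "admin": {"read_logs", "change_config"},
}

def rbac_allows(role, permission):
    """RBAC decision: does the user's role carry this permission?"""
    return permission in ROLE_PERMISSIONS.get(role, set())

def abac_allows(user_attrs, resource_attrs, env_attrs):
    """ABAC decision: combine user, resource, and environment attributes."""
    return (user_attrs["clearance"] >= resource_attrs["sensitivity"]
            and env_attrs["business_hours"])

print(rbac_allows("analyst", "change_config"))  # False: the role lacks it
print(abac_allows({"clearance": 3}, {"sensitivity": 2},
                  {"business_hours": True}))    # True: all attributes satisfied
```

Note how the ABAC check can deny access outside business hours even for a highly cleared user, something a pure role lookup cannot express.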

  7. Describe terms as defined in CVSS:

    • Attack vector: The means by which an attacker can gain access to a device or network to deliver a payload or malicious outcome.
    • Attack complexity: Refers to the level of effort and resources required to carry out an attack.
    • Privileges required: Indicates the level of privileges an attacker must possess to successfully exploit a vulnerability.
    • User interaction: Specifies whether the success of the exploit is dependent on any action by a user.
    • Scope: Determines whether an exploit impacts resources beyond its security scope.
    • Temporal metrics: Refers to the characteristics of a vulnerability that change over time.
    • Environmental metrics: Customizes the CVSS score based on the importance of the affected IT asset to a user’s organization.
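
The base metrics above are encoded in a CVSS vector string. As a small illustration, the sketch below splits a CVSS v3.1 vector into its metric/value pairs (it does not compute the numeric score):

```python
def parse_cvss_vector(vector):
    """Split a CVSS v3.1 vector string into its metric/value pairs."""
    parts = vector.split("/")
    if not parts[0].startswith("CVSS:"):
        raise ValueError("expected a CVSS version prefix")
    return dict(p.split(":") for p in parts[1:])

metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
print(metrics["AV"])  # 'N' -> network attack vector
print(metrics["PR"])  # 'N' -> no privileges required
```

Here AV is attack vector, AC is attack complexity, PR is privileges required, UI is user interaction, S is scope, and C/I/A are the confidentiality, integrity, and availability impacts.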

  8. Identify the challenges of data visibility (network, host, and cloud) in detection: Data visibility challenges involve detecting and managing data across different environments where visibility can be obscured by various factors like encryption and the use of multiple layers and technologies.

  9. Identify potential data loss from traffic profiles: This involves analyzing network traffic to detect patterns that may indicate a data breach or unauthorized data exfiltration activities.

  10. Interpret the 5-tuple approach to isolate a compromised host in a grouped set of logs: The 5-tuple approach (source IP, destination IP, source port, destination port, and protocol) is used to isolate network communications unique to a compromised host, aiding in forensic analysis and incident response.
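
As an illustration, the sketch below groups hypothetical flow records by everything except the ephemeral source port, surfacing a repeating communication pattern from one host:

```python
from collections import Counter

# Hypothetical flow records: (src_ip, dst_ip, src_port, dst_port, protocol)
flows = [
    ("10.0.0.5", "203.0.113.9", 49152, 443, "TCP"),
    ("10.0.0.5", "203.0.113.9", 49153, 443, "TCP"),
    ("10.0.0.7", "198.51.100.2", 49200, 53, "UDP"),
    ("10.0.0.5", "203.0.113.9", 49154, 443, "TCP"),
]

# Ignore the ephemeral source port so repeated sessions collapse into one pattern
pattern_counts = Counter((src, dst, dport, proto)
                         for src, dst, _sport, dport, proto in flows)
suspect, hits = pattern_counts.most_common(1)[0]
print(suspect, hits)  # ('10.0.0.5', '203.0.113.9', 443, 'TCP') 3
```

In a real investigation the same grouping lets an analyst pull every log line belonging to the compromised host's conversations out of a mixed log set.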

  11. Compare rule-based detection vs. behavioral and statistical detection: Rule-based detection uses set patterns to identify malicious activity, while behavioral and statistical detection analyzes deviations from normal behavior to identify potential threats, providing a dynamic approach to security monitoring.
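
The two detection styles can be contrasted in a short Python sketch. The signature and traffic numbers below are hypothetical; the point is where each approach gets its notion of "malicious":

```python
import re
from statistics import mean, stdev

# Rule-based: a fixed signature (here, a simplistic SQL-injection pattern)
SIGNATURE = re.compile(r"union\s+select", re.IGNORECASE)

def rule_detect(request):
    """Match known-bad patterns; misses anything the rule does not describe."""
    return bool(SIGNATURE.search(request))

def statistical_detect(baseline, current, threshold=3.0):
    """Flag hosts whose current rate deviates sharply from a historical baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [host for host, rate in current.items()
            if sigma > 0 and abs(rate - mu) / sigma > threshold]

print(rule_detect("GET /item?id=1 UNION SELECT password FROM users"))  # True

baseline = [10, 12, 11, 9, 10, 11, 10, 12]       # requests/min, normal days
print(statistical_detect(baseline, {"10.0.0.5": 480, "10.0.0.7": 11}))
```

The rule fires only on traffic matching a known pattern; the statistical check fires on any large deviation from normal, including attacks no rule yet describes.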

2.0 Security Monitoring (25%)

This domain focuses on the tools and methods used to monitor and detect security threats within an organization. It delves into the types of data provided by technologies such as TCP dump, NetFlow, and various firewall configurations, explaining their roles in securing networks. Discussions include how certain technologies can impact data visibility, affecting the ability to detect and respond to threats effectively. The curriculum also explains the use of different data types in security monitoring, such as full packet capture and metadata, and their importance in identifying security incidents. This section further describes common network and web application attacks, detailing strategies to mitigate protocol-based, denial of service, and man-in-the-middle attacks, among others. It also covers social engineering and endpoint-based attacks, providing insights into the methods used by attackers and the techniques for defending against them.

  1. Compare attack surface and vulnerability: The attack surface is the set of all points where an unauthorized user could attempt to enter data into, or extract data from, an environment. A vulnerability, by contrast, is a specific weakness or gap in a security program that a threat can exploit to gain unauthorized access to an asset; a larger attack surface offers more opportunities for vulnerabilities to be exposed and exploited.

  2. Identify the types of data provided by these technologies:

    • TCP dump (tcpdump): Captures packets passing through a network interface, providing raw packet data for traffic monitoring and analysis.
    • NetFlow: Provides data about network traffic flow and volume, essential for understanding traffic patterns and behavior.
    • Next-gen firewall: Delivers detailed data about applications, users, and content, facilitating more granular security policies.
    • Traditional stateful firewall: Tracks the state of active connections and uses that state to determine which network packets to allow through the firewall.
    • Application visibility and control: Helps in identifying and controlling applications running on a network.
    • Web content filtering: Blocks or allows web pages accessible through the network based on policies.
    • Email content filtering: Manages the handling of email to filter out threats such as spam, phishing, and malware.

  3. Describe the impact of these technologies on data visibility:

    • Access control list (ACL): Limits data flows to ensure that only legitimate traffic is allowed, potentially reducing data visibility if too restrictive.
    • NAT/PAT: Hides real IP addresses, which can complicate the monitoring and tracking of individual devices.
    • Tunneling: Encapsulates packets, including their payload and original headers, which can obscure visibility into the data.
    • TOR: Enables anonymous communication, significantly reducing visibility for security monitoring.
    • Encryption: Secures data by converting it into a coded format, often making it invisible to security tools unless decryption occurs.
    • P2P: Peer-to-peer networks can distribute data across many peers, complicating data monitoring and visibility.
    • Encapsulation: Similar to tunneling, it wraps data in additional headers that can hide packet details.
    • Load balancing: Distributes incoming network traffic across multiple servers, potentially obscuring the view of data paths.

  4. Describe the uses of these data types in security monitoring:

    • Full packet capture: Provides a complete record of all network traffic, allowing for detailed analysis and forensic investigations.
    • Session data: Captures essential details about each session that can identify unauthorized access.
    • Transaction data: Records details of transactions processed by the system which can be analyzed for suspicious activities.
    • Statistical data: Offers aggregated data that can highlight trends and patterns used for high-level network overview.
    • Metadata: Provides data about data, which can be crucial for understanding the context of information flows.
    • Alert data: Contains information generated by security systems when an incident or anomaly is detected, essential for prompt response.

  5. Describe network attacks:

    • Protocol-based attacks: Such as IP spoofing where an attacker alters the IP address of their device to impersonate a different device.
    • Denial of Service (DoS) and Distributed Denial of Service (DDoS): Attacks aimed at disrupting the service by overwhelming the target with excessive requests.
    • Man-in-the-middle (MitM): The attacker secretly intercepts and possibly alters the communication between two parties who believe they are directly communicating with each other.

  6. Describe web application attacks:

    • SQL injection: Occurs when an attacker exploits a vulnerability to execute malicious SQL statements that control a web application’s database server.
    • Command injections: The attacker tricks the application into executing an unintended command or accessing unauthorized data.
    • Cross-site scripting (XSS): An attacker injects malicious scripts into content from otherwise trusted websites.
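
To make SQL injection concrete, the sketch below uses Python's built-in sqlite3 module with a hypothetical users table, contrasting a vulnerable concatenated query with the standard parameterized-query mitigation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "alice' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query logic
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

# Mitigated: a parameterized query treats the payload as literal data
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # both rows returned: the WHERE clause was subverted
print(safe)        # no rows: no user is literally named "alice' OR '1'='1"
```

The same principle (never splice untrusted input into code) also underlies defenses against command injection and XSS, where output encoding plays the role that parameter binding plays here.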

  7. Describe social engineering attacks: These are manipulative techniques that trick users into making security mistakes or giving away sensitive information.

  8. Describe endpoint-based attacks:

    • Buffer overflows: An anomaly where a program overruns the buffer’s boundary and overwrites adjacent memory.
    • Command and control (C2): Techniques that enable attackers to communicate and control compromised systems within a target network.
    • Malware and ransomware: Malicious software designed to disrupt, damage, or gain unauthorized access to a computer system, with ransomware specifically encrypting data to extort ransom from victims.

  9. Describe evasion and obfuscation techniques:

    • Tunneling: Uses protocols like SSH to encapsulate malicious traffic inside legitimate traffic.
    • Encryption: Uses cryptographic techniques to render data unreadable without a decryption key, hiding the content from security tools.
    • Proxies: Hide a user’s real IP address, complicating the ability to track the user’s activities and real location.

  10. Describe the impact of certificates on security: Explains how PKI and certificates enhance security by enabling secure, encrypted communications across the network. Certificates help in authenticating and validating user connections and data exchanges.

  11. Identify the certificate components in a given scenario:

    • Cipher-suite: Defines the set of algorithms (key exchange, authentication, encryption, and integrity) used to secure a connection.
    • X.509 certificates: Standard format for public key certificates, which include the public key and identity of the holder.
    • Key exchange: Part of the cryptographic process that secures data transfer by establishing session keys.
    • Protocol version: Specifies the protocol standard used in securing data, such as TLS 1.2 or TLS 1.3.
    • PKCS: Public-Key Cryptography Standards, a set of interoperable standards and guidelines for public-key cryptography.
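
Cipher suites and protocol versions can be inspected directly from Python's standard ssl module. The exact suites printed depend on the local OpenSSL build, so treat the output as illustrative:

```python
import ssl

# Inspect the cipher suites a default TLS client context would offer
ctx = ssl.create_default_context()
ciphers = ctx.get_ciphers()  # list of dicts describing each enabled suite

for c in ciphers[:3]:
    # Each entry names the suite and the protocol version it belongs to,
    # e.g. TLS_AES_256_GCM_SHA384 - TLSv1.3 (varies by OpenSSL build)
    print(c["name"], "-", c["protocol"])
```

Matching a suite name against the negotiated protocol version is exactly the kind of component identification this objective asks for.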

3.0 Host-Based Analysis (20%)

In this domain, the focus is on the tools and strategies used to analyze and secure individual hosts against cyber threats. It covers endpoint technologies like host-based intrusion detection systems, antimalware, and firewalls, and discusses their roles in protecting individual devices. The section also highlights the importance of understanding operating system components and their implications for security. Key elements of attribution in cybersecurity investigations are discussed, such as identifying assets, threat actors, and indicators of compromise or attack. Additionally, the domain addresses the types of evidence that can be gleaned from logs and how to interpret these to build a coherent defense and response strategy.

  1. Describe the functionality of these endpoint technologies in regard to security monitoring:

    • Host-based intrusion detection: Monitors the computer system for malicious activities or policy violations.
    • Antimalware and antivirus: Software designed to detect, thwart, and remove malicious software.
    • Host-based firewall: Controls the incoming and outgoing network traffic based on predetermined security rules at the host level.
    • Application-level allow listing/block listing: Allow lists permit only approved applications to execute, while block lists prevent known unauthorized applications from running.
    • Systems-based sandboxing: Executes programs in a virtual environment to isolate them from the primary operating system, protecting the system from potential threats.

  2. Identify components of an operating system (such as Windows and Linux) in a given scenario: Refers to understanding critical system components like the kernel, system calls, and file system hierarchy, which are fundamental for managing the system and troubleshooting issues.

  3. Describe the role of attribution in an investigation:

    • Assets: Identifying resources that hold value to the organization and could be targeted by threats.
    • Threat actor: Determining the entity responsible for an incident or breach.
    • Indicators of compromise (IoCs): Artifacts observed on a network or in an operating system that with high confidence indicate a computer intrusion.
    • Indicators of attack (IoAs): Identifies adversarial behavior that seeks to compromise the integrity, confidentiality, or availability of a resource.
    • Chain of custody: Refers to the chronological documentation or paper trail showing the seizure, custody, control, transfer, analysis, and disposition of evidence, physical or electronic.

  4. Identify type of evidence used based on provided logs:

    • Best evidence: The most reliable and direct form of evidence, typically the original item (such as an unaltered disk image), with a clear and straightforward connection to the fact it supports.
    • Corroborative evidence: Provides supplementary support to the initial evidence, helping to strengthen the case.
    • Indirect evidence: Offers proof of an intermediate fact that can inferentially prove the main fact.

  5. Compare tampered and untampered disk image: Involves analyzing disk images to identify if they have been altered or left in their original state, which is crucial in forensic investigations to ensure the integrity of the data.
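
The standard integrity check is a cryptographic hash taken at acquisition time. The sketch below uses Python's hashlib on hypothetical image bytes to show how even a one-byte change is exposed:

```python
import hashlib

def image_hash(data, algorithm="sha256"):
    """Hash a disk image so later copies can be checked against the baseline."""
    return hashlib.new(algorithm, data).hexdigest()

original = b"\x00" * 512 + b"MBR-DATA" + b"\x00" * 512   # hypothetical image bytes
acquired = bytes(original)                                # forensic copy
tampered = original.replace(b"MBR-DATA", b"MBR-EVIL")     # one altered region

baseline = image_hash(original)
print(image_hash(acquired) == baseline)  # True: untampered copy
print(image_hash(tampered) == baseline)  # False: any change alters the hash
```

Recording the baseline hash in the chain-of-custody documentation is what lets an examiner later prove the analyzed image matches the seized original.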

  6. Interpret operating system, application, or command line logs to identify an event: Skill in reading and understanding log files generated by the OS or applications, which can provide valuable insights into what has occurred on the system.

  7. Interpret the output report of a malware analysis tool such as a detonation chamber or sandbox:

    • Hashes: Unique values computed from digital data to detect changes or identify associated pieces of information.
    • URLs: Addresses of resources on the internet which can be malicious or benign.
    • Systems, events, and networking: Understanding how the system’s components interact and are represented in the tool’s report can help trace malicious activities or anomalies.

4.0 Network Intrusion Analysis (20%)

This domain explores techniques and tools used to detect and investigate network-based threats. It includes mapping events to source technologies like IDS/IPS and firewalls, and discusses the significance of differentiating between benign and malicious events. Techniques such as deep packet inspection, traffic interrogation, and the use of taps or traffic monitoring are compared to illustrate their roles in analyzing network traffic. The curriculum also teaches how to extract and analyze files from network traffic using tools like Wireshark and how to interpret protocol headers and other artifacts to identify and respond to network intrusions effectively.

  1. Map the provided events to source technologies:

    • IDS/IPS: Systems designed to detect and prevent threats in real time.
    • Firewall: Monitors and controls incoming and outgoing network traffic based on predetermined security rules.
    • Network application control: Manages applications’ network access.
    • Proxy logs: Record the details of web requests going through the proxy server.
    • Antivirus: Software used to prevent, scan, detect, and delete viruses from a computer.
    • Transaction data (NetFlow): Provides data about the flow of traffic through the network which can be analyzed to detect anomalies.

  2. Compare impact and no impact for these items:

    • False positive: A non-threatening action is incorrectly flagged as malicious by security processes.
    • False negative: A failure of the security system to identify an actual threat, letting it pass through undetected.
    • True positive: A correct identification by the security system that an activity or condition is malicious.
    • True negative: A correct verification that an activity or condition is not malicious.
    • Benign: Something that is safe and does not present a threat.
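
These outcomes form a confusion matrix, which a small Python sketch can tally from hypothetical (was_flagged, was_actually_malicious) pairs:

```python
def classify_outcomes(alerts):
    """Tally detection outcomes from (was_flagged, was_actually_malicious) pairs."""
    tally = {"TP": 0, "FP": 0, "FN": 0, "TN": 0}
    for flagged, malicious in alerts:
        if flagged and malicious:
            tally["TP"] += 1      # true positive: real threat, correctly flagged
        elif flagged and not malicious:
            tally["FP"] += 1      # false positive: benign activity flagged
        elif not flagged and malicious:
            tally["FN"] += 1      # false negative: real threat missed
        else:
            tally["TN"] += 1      # true negative: benign, correctly ignored
    return tally

events = [(True, True), (True, False), (False, True), (False, False), (True, True)]
print(classify_outcomes(events))  # {'TP': 2, 'FP': 1, 'FN': 1, 'TN': 1}
```

For impact analysis, false negatives are usually the costliest outcome (a missed intrusion), while high false-positive counts erode analyst time and trust in alerts.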

  3. Compare deep packet inspection with packet filtering and stateful firewall operation: Discusses the differences between inspecting packet payloads (deep packet inspection), filtering packets based on predefined rules (packet filtering), and monitoring the state of active connections (stateful firewall) to ensure secure communications.

  4. Compare inline traffic interrogation and taps or traffic monitoring: Inline traffic interrogation involves actively analyzing and potentially blocking traffic as it flows through the network, whereas taps or traffic monitoring involves passive listening to the traffic without interrupting or altering it.

  5. Compare the characteristics of data obtained from taps or traffic monitoring and transactional data (NetFlow) in the analysis of network traffic: Taps or traffic monitoring provides a real-time and comprehensive view of the data passing through the network, offering detailed insights into traffic attributes. In contrast, NetFlow collects metadata about network transactions, which is less granular but useful for understanding traffic patterns over time.

  6. Extract files from a TCP stream when given a PCAP file and Wireshark: Demonstrates the ability to use Wireshark, a network protocol analyzer, to capture and analyze packets from a network, including extracting files from the data flows captured in PCAP files.
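
Wireshark (File > Export Objects, or Follow TCP Stream) is the tool the exam expects here, but it helps to know what a capture file actually contains. The sketch below is a minimal, stdlib-only reader for the classic little-endian libpcap format, fed by a one-record capture built in memory; the record body is raw bytes for demonstration rather than a full Ethernet frame:

```python
import struct
from io import BytesIO

def read_pcap_records(stream):
    """Yield (timestamp, packet_bytes) from a classic libpcap capture stream."""
    magic, = struct.unpack("<I", stream.read(4))
    if magic != 0xA1B2C3D4:
        raise ValueError("not a little-endian classic pcap")
    stream.read(20)  # skip the rest of the 24-byte global header
    while True:
        header = stream.read(16)
        if len(header) < 16:
            break
        ts_sec, ts_usec, incl_len, _orig_len = struct.unpack("<IIII", header)
        yield ts_sec + ts_usec / 1e6, stream.read(incl_len)

# Build a one-packet capture in memory (hypothetical data)
payload = b"GET /flag.txt HTTP/1.1\r\n"
capture = (struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
           + struct.pack("<IIII", 1700000000, 0, len(payload), len(payload))
           + payload)

for ts, packet in read_pcap_records(BytesIO(capture)):
    print(ts, packet)
```

Extracting a transferred file amounts to reassembling the packet payloads of one TCP stream in sequence-number order, which is what Wireshark's export feature automates.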

  7. Identify key elements in an intrusion from a given PCAP file:

    • Source address: The initiating source of the network traffic.
    • Destination address: The target or recipient of the network traffic.
    • Source port: The port number used by the source for the connection.
    • Destination port: The port number at the destination.
    • Protocols: The communication protocols used in the network session.
    • Payloads: The data part of a packet, typically carrying the actual message data as opposed to headers.

  8. Interpret the fields in protocol headers as related to intrusion analysis:

    • Ethernet frame: Basic unit of data transmission in Ethernet networks.
    • IPv4/IPv6: Internet Protocol versions that provide the logical addressing required for data packets to be routed across the network.
    • TCP/UDP: Transport protocols that manage the data transmission process.
    • ICMP: Used for diagnostic or control purposes or generated in response to errors in IP operations (notably, when routers cannot route a packet to its destination).
    • DNS: Translates more readily memorized domain names to the numerical IP addresses needed for locating and identifying computer services and devices.
    • SMTP/POP3/IMAP: Protocols used for sending and retrieving emails.
    • HTTP/HTTPS/HTTP2: Protocols used for web communication, with HTTPS being the TLS-encrypted version of HTTP and HTTP/2 a multiplexed revision of the protocol.
    • ARP: Resolves IP addresses to machine (MAC) addresses.
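
Header interpretation becomes concrete when you decode the bytes yourself. The sketch below unpacks the fixed 20-byte IPv4 header from a hypothetical packet using Python's struct module:

```python
import struct

def parse_ipv4_header(data):
    """Decode the fixed 20-byte IPv4 header into fields relevant to analysis."""
    version_ihl, _tos, total_len, _ident, _flags_frag, ttl, proto, _checksum = \
        struct.unpack("!BBHHHBBH", data[:12])
    src = ".".join(str(b) for b in data[12:16])
    dst = ".".join(str(b) for b in data[16:20])
    return {
        "version": version_ihl >> 4,
        "ihl_bytes": (version_ihl & 0x0F) * 4,
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,       # 6 = TCP, 17 = UDP, 1 = ICMP
        "src": src,
        "dst": dst,
    }

# Hypothetical header: IPv4, TTL 64, protocol TCP, 192.0.2.1 -> 198.51.100.7
header = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
                192, 0, 2, 1, 198, 51, 100, 7])
print(parse_ipv4_header(header))
```

The protocol field (6 here) tells the analyst which transport header follows, which is exactly the chaining logic used when walking Ethernet > IP > TCP in a capture.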

  9. Interpret common artifact elements from an event to identify an alert:

    • IP address (source/destination): Helps in identifying the origins and targets of network traffic.
    • Client and server port identity: Provides additional context about the application or service used in the communication.
    • Process (file or registry): Indicates which processes were involved in the event, which can be crucial for identifying malicious activities.
    • System (API calls): Reveals interactions with the operating system that could indicate malicious behavior.
    • Hashes: Used to verify the integrity of data or detect changes in data.
    • URI/URL: Helps in determining the nature of the request or the type of resources being accessed.

  10. Interpret basic regular expressions: Involves understanding and crafting regex patterns, which are crucial for searching, matching, and manipulating text based on specific patterns. This skill is essential for analyzing logs and data streams effectively.
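
As an example of the regex skills this objective covers, the sketch below extracts the username and source IP from a failed-SSH-login line; the log line and pattern are illustrative, not an official format:

```python
import re

# A simplified pattern for failed SSH logins in an auth log
pattern = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) "
    r"from (?P<ip>\d{1,3}(?:\.\d{1,3}){3})")

line = ("Jan 12 03:14:07 host sshd[412]: Failed password for "
        "invalid user admin from 203.0.113.50 port 51022 ssh2")
m = pattern.search(line)
print(m.group("user"), m.group("ip"))  # admin 203.0.113.50
```

Named groups like `(?P<user>...)`, optional non-capturing groups `(?:...)?`, and bounded repetition `{1,3}` are the constructs most often seen in log-analysis patterns.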

5.0 Security Policies and Procedures (15%)

The final domain covers the policies and procedures that govern the secure operation of an organization’s information systems. It discusses management concepts such as asset and patch management, which are essential for maintaining the security posture of an organization. The curriculum also outlines the elements of an incident response plan as per NIST.SP800-61, detailing steps from preparation to recovery and post-incident analysis. This section emphasizes the importance of aligning organizational stakeholders with security practices and describes the principles of evidence collection and preservation as documented in NIST.SP800-86. Additionally, techniques for network and server profiling are explored, providing insights into how to monitor and secure critical infrastructure effectively.

  1. Describe management concepts:

    • Asset management: The process of ensuring that the assets, which are valuable to an organization, are accounted for, deployed, maintained, upgraded, and disposed of appropriately.
    • Configuration management: Involves maintaining the system’s configuration information, and controlling changes to ensure system integrity over time.
    • Mobile device management: Involves the administration of mobile devices, such as smartphones, tablets, and laptops.
    • Patch management: The process of managing a network of computers by regularly deploying all missing patches to keep computers up to date.
    • Vulnerability management: Involves identifying, classifying, remediating, and mitigating vulnerabilities in software.

  2. Describe the elements in an incident response plan as stated in NIST.SP800-61: Details the structured approach for handling cybersecurity incidents, including preparation, detection and analysis, containment, eradication, and recovery, and post-incident activities.

  3. Apply the incident handling process such as NIST.SP800-61 to an event: Utilizes the guidelines provided by NIST for incident response to manage and mitigate security incidents effectively.

  4. Map elements to these steps of analysis based on the NIST.SP800-61:

    • Preparation: Establishing and maintaining the capability to respond to incidents.
    • Detection and analysis: The process of detecting and analyzing events to determine whether they constitute security incidents.
    • Containment, eradication, and recovery: Steps taken to contain incidents, remove the cause, and recover systems to operational status.
    • Post-incident analysis (lessons learned): Reviewing and learning from the incident to improve future response efforts.

  5. Map the organization stakeholders against the NIST IR categories (CMMC, NIST.SP800-61): Identifies how different stakeholders in an organization align with the various categories and phases of the NIST Incident Response process.

  6. Describe concepts as documented in NIST.SP800-86:

    • Evidence collection order: The sequence in which evidence should be collected to preserve its integrity.
    • Data integrity: Ensuring that data is accurate, consistent and unaltered.
    • Data preservation: The process of maintaining and safeguarding the information within an organization.
    • Volatile data collection: Involves collecting data that could be lost when the system is powered down or altered during the course of investigation.

  7. Identify these elements used for network profiling:

    • Total throughput: Measures the amount of data passing through a network at any given time.
    • Session duration: The amount of time a session remains active in a network.
    • Ports used: Identifies which TCP or UDP ports are being utilized, which can indicate types of services or applications.
    • Critical asset address space: Addresses assigned to critical systems within the network.

  8. Identify these elements used for server profiling:

    • Listening ports: Ports on which the server is accepting incoming connections.
    • Logged in users/service accounts: Accounts that are currently accessing the server.
    • Running processes: Applications or tasks that are active on the server.
    • Running tasks: Jobs that are scheduled or currently running on the server.
    • Applications: Software applications installed and running on the server.

  9. Identify protected data in a network:

    • PII (Personally Identifiable Information): Information that can be used to distinguish or trace an individual’s identity.
    • PSI (Protected Sensitive Information): A broader category that may include business secrets, classified national defense information, and sensitive financial data.
    • PHI (Protected Health Information): Information about health status, provision of health care, or payment for health care that can be linked to an individual.
    • Intellectual property: Legal rights that result from intellectual activity in the industrial, scientific, literary, and artistic fields.

  10. Classify intrusion events into categories as defined by security models, such as Cyber Kill Chain Model and Diamond Model of Intrusion:

    • The Cyber Kill Chain Model helps in identifying and preventing cyber intrusions by detailing each stage of an attack: reconnaissance, weaponization, delivery, exploitation, installation, command and control, and actions on objectives.
    • The Diamond Model of Intrusion Analysis provides a more multifaceted approach by examining the four core features of every intrusion event: the adversary, their capability, their infrastructure, and the victim.

  11. Describe the relationship of SOC metrics to scope analysis:

    • Time to detect: Measures how quickly a security threat is identified.
    • Time to contain: The speed with which a threat is neutralized or contained.
    • Time to respond: The overall time taken to respond to a detected security event.
    • Time to control: The duration to bring everything back under normal operational control after an incident.
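
These metrics are just deltas over an incident timeline. The sketch below computes them from hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical incident timeline
timeline = {
    "occurred":  datetime(2024, 1, 5, 9, 0),
    "detected":  datetime(2024, 1, 5, 9, 45),
    "responded": datetime(2024, 1, 5, 10, 10),
    "contained": datetime(2024, 1, 5, 11, 30),
}

def minutes(start, end):
    """Elapsed minutes between two named points on the timeline."""
    return (timeline[end] - timeline[start]).total_seconds() / 60

print("time to detect:", minutes("occurred", "detected"), "min")    # 45.0
print("time to respond:", minutes("occurred", "responded"), "min")  # 70.0
print("time to contain:", minutes("occurred", "contained"), "min")  # 150.0
```

Tracked over many incidents, shrinking these deltas is how a SOC demonstrates that detection and response coverage across its monitored scope is improving.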
