
cloud-crunch-sdk's Issues

Integrate with SIEM systems

Security Information and Event Management (SIEM) is a type of software that gathers log data and real-time event data generated from systems throughout an organization's technology infrastructure and turns it into actionable information. Important uses of SIEM tools include alerting security staff about immediate security issues that need to be dealt with, and automated monitoring for compliance with important regulations such as HIPAA, PCI/DSS, and GDPR.

Among the systems that a SIEM solution collates data from are antivirus software, firewalls, host systems, database servers, web filters, network switches, and more. The SIEM tool takes all this information and aggregates it for a broad end-to-end view. The software then analyzes the aggregate data to find patterns and correlations and provide reports on security-related incidents and events.

Best Practices

  • Brush up on networking best practices.
  • Establish requirements to get a well-defined picture of your SIEM deployment, including objectives, prioritized targets derived from those objectives, and the overall workflow.
  • Collect as much diverse data as possible. More information is better when it comes to feeding data into a SIEM system, because the essential role of these tools is to take a large volume of log and event data and answer meaningful questions from it. Those questions can't be answered without sufficient data.
  • Have a comprehensive incident response plan. A huge part of the appeal of SIEM is that it provides real-time monitoring and alerts for IT threat detection, facilitating rapid responses to a range of security incidents. However, the onus for properly responding to these incidents falls on the organization that implements SIEM, not on the tool itself.
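The aggregation-and-correlation workflow described above can be sketched in a few lines. This is a minimal illustration, not a real SIEM; the event schema, field names, and the brute-force threshold are all assumptions for the example.

```python
from collections import Counter

def detect_brute_force(events, threshold=5):
    """Correlate failed-login events by source IP across many log sources
    and flag likely brute-force attempts. `events` uses an assumed,
    simplified schema: dicts with 'source', 'ip', and 'outcome' keys."""
    failures = Counter(e["ip"] for e in events if e["outcome"] == "login_failure")
    return sorted(ip for ip, count in failures.items() if count >= threshold)

# Events aggregated from several systems (host, database server, firewall).
events = (
    [{"source": "host", "ip": "203.0.113.9", "outcome": "login_failure"}] * 6
    + [{"source": "db", "ip": "198.51.100.7", "outcome": "login_failure"}] * 2
    + [{"source": "host", "ip": "198.51.100.7", "outcome": "login_success"}]
)
print(detect_brute_force(events))  # only 203.0.113.9 crosses the threshold
```

A real SIEM applies many such correlation rules over normalized data from every connected source, which is why the "collect diverse data" practice above matters.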

Lock Down Virtual Machines

Locking down virtual machines is crucial to ensure you are not keeping doors open for unwanted company. Here is a list of the best practices to ensure your VMs are hardened.

Best Practices

  • Disable the following remote-access protocols on VMs when they are not required for administration:
  • Remote Desktop Protocol (RDP).
  • Secure Shell (SSH).
  • Ensure patches are up to date.
  • Lock down serial ports and other peripheral interfaces that are not being utilized.

Have a disaster recovery plan in place for cryptographic key problems

If a key is lost or needs to be revoked in an emergency, make sure you have a plan for recovering any data that is protected with that key.

Special consideration must be given if you are using a non-public certificate authority, which delegates operational responsibilities to you, the administrator. If you are using a public CA, the points below largely do not apply.

Best Practices

  • Consider how to recover data if a key is lost (emergency key, second master key held by another person)
  • Rule of thumb: If all encryption keys for data are lost, the data itself is lost
  • Consider how to revoke keys if a key is compromised (certificate revocation lists, secret/key versioning)
  • Consider how to securely backup/restore keys, if appropriate
  • Use soft delete and purge protection features in Key Vault to mitigate accidental/malicious deletion
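The "emergency key / second master key" practice above can be sketched as an escrow of the data key under two independently held key-encryption keys (KEKs). This is illustrative only: XOR with a hash-derived keystream stands in for real key wrapping (a production system would use an authenticated scheme such as AES Key Wrap in an HSM or Key Vault), and all names are hypothetical.

```python
import hashlib
import secrets

def wrap(kek: bytes, data_key: bytes) -> bytes:
    """Toy key wrap: XOR with a SHA-256-derived keystream. Illustrative
    only; not a real or authenticated key-wrapping scheme."""
    stream = hashlib.sha256(kek).digest()[: len(data_key)]
    return bytes(a ^ b for a, b in zip(data_key, stream))

unwrap = wrap  # XOR is its own inverse

# Wrap the data key under two independently held KEKs so that losing one
# (or its holder) does not make the protected data unrecoverable.
data_key = secrets.token_bytes(32)
primary_kek = secrets.token_bytes(32)
recovery_kek = secrets.token_bytes(32)   # e.g. held by a second custodian
escrow = {"primary": wrap(primary_kek, data_key),
          "recovery": wrap(recovery_kek, data_key)}

# Simulate losing the primary KEK: the data key is still recoverable.
recovered = unwrap(recovery_kek, escrow["recovery"])
assert recovered == data_key
```

The rule of thumb above follows directly: if every wrapped copy and every KEK is lost, `data_key` — and the data it protects — is gone.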

Control Routing Behavior

When you put a virtual machine on a virtual network, the VM can connect to any other VM on the same virtual network, even if the other VMs are on different subnets. This is possible because a collection of system routes enabled by default allows this type of communication. These default routes allow VMs on the same virtual network to initiate connections with each other, and with the internet (for outbound communications to the internet only).

Best Practices

  • Tune the default routing table entries in your virtual network. Although the default system routes are useful for many deployment scenarios, there are times when you want to customize the routing configuration for your deployments. You can configure the next-hop address to reach specific destinations.
  • Ensure that your services are not allowed to initiate connections to devices on the internet directly. By enabling forced tunneling, all connections to the internet are forced through your on-premises gateway. You can configure forced tunneling by taking advantage of user-defined routes (UDRs).
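The two practices above can be illustrated with a toy route table: routing picks the most specific matching prefix, and a 0.0.0.0/0 entry implements forced tunneling by sending all internet-bound traffic to a gateway. This is a sketch of the lookup logic only; the prefixes and next-hop names are assumptions.

```python
import ipaddress

# A simplified route table modeled on UDRs: each entry maps an address
# prefix to a next hop. The 0.0.0.0/0 route forces internet-bound traffic
# through an (assumed) on-premises gateway, while VNet traffic stays local.
routes = {
    "10.0.0.0/16": "vnet-local",
    "0.0.0.0/0": "on-prem-gateway",   # forced tunneling
}

def next_hop(dest: str) -> str:
    """Longest-prefix match, as a routing table would apply it."""
    dest_ip = ipaddress.ip_address(dest)
    matches = [ipaddress.ip_network(p) for p in routes
               if dest_ip in ipaddress.ip_network(p)]
    best = max(matches, key=lambda n: n.prefixlen)
    return routes[str(best)]

print(next_hop("10.0.4.20"))      # vnet-local
print(next_hop("93.184.216.34"))  # on-prem-gateway
```

Customizing the next-hop address for specific destinations, as the first bullet suggests, amounts to adding more specific prefixes that win the longest-prefix match.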

Advanced Threat Protection

Advanced Threat Protection for Azure SQL Database, Azure SQL Managed Instance and Azure Synapse Analytics detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases.

Advanced Threat Protection is part of the Azure Defender for SQL offering, which is a unified package for advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central Azure Defender for SQL portal.

Best Practices

Below are technologies that we recommend implementing in your environment. Not all of the recommended technologies need to be adopted; choose only the solutions that work best for you and your customer.

Azure Security Center

  • Create alerts for SQL Database and Azure Synapse Analytics in Azure Security Center.
  • Integrate advanced threat protection alerts with Azure Security Center.
  • Gather log data from the following alerts:
  • Access from unusual locations
  • Unusual data extraction
  • A possible vulnerability to SQL Injection
  • Attempted logon by a potentially harmful application
  • Log on from an unusual Azure Data Center
  • Login from a principal user not seen in 60 days
  • Potential SQL Brute Force attempt
  • Potential SQL injection

Azure Defender

  • Implement Azure Defender for SQL offering, which is a unified package for advanced SQL security capabilities. Advanced Threat Protection can be accessed and managed via the central Azure Defender for SQL portal.
  • Enable Vulnerability Assessment which is an easy-to-configure service that can discover, track, and help you remediate potential database vulnerabilities. It provides visibility into your security state, and it includes actionable steps to resolve security issues and enhance your database fortifications.

Azure Active Directory

  • Implement Azure AD Identity Protection, an Azure Active Directory Premium P2 edition feature that provides an overview of the risk detections and potential vulnerabilities that can affect your organization's identities.
  • Adopt the Azure AD anomaly-detection capabilities that are available through Azure AD Anomalous Activity Reports, which introduce new risk detection types that can detect real-time anomalies.
  • Configure Azure Active Directory Identity Protection to monitor and report on at-risk identities. Based on risk detections, Identity Protection calculates a user risk level for each user, so that you can configure risk-based policies to automatically protect the identities of your organization.

Azure Monitor Logs

  • Implement Azure Monitor Logs, a Microsoft cloud-based IT management solution that helps you manage and protect your on-premises and cloud infrastructure.

Create a Key Management Plan

Protecting your keys is essential to protecting your data in the cloud. Please refer to our key management best practices for a deeper look at recommended key management practices. Encryption key management is the administration of the full lifecycle of cryptographic keys. The proper management of cryptographic keys is essential to the effective use of cryptography for security.

Best Practices

  • Create a key management plan to better safeguard your data.
  • Only use NIST standard encryption algorithms such as AES-GCM or AES-CCM to help mitigate risks related to unauthorized data access.
  • Create strong key materials to protect your data.
  • Rotate encryption keys often.
  • Ensure data integrity checks on the data exist.
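The "rotate encryption keys often" practice above implies keeping old key versions readable: data encrypted under version 1 must still be decryptable after rotating to version 2. A minimal sketch of a versioned keyring, under the assumption that each stored object records the key version it was written with:

```python
import secrets

class Keyring:
    """Versioned keyring sketch: rotation adds a new current key while old
    versions remain available so previously written data can still be read."""
    def __init__(self):
        self.versions = {}
        self.current = 0
        self.rotate()

    def rotate(self):
        """Generate fresh key material and make it the current version."""
        self.current += 1
        self.versions[self.current] = secrets.token_bytes(32)

    def key(self, version=None):
        return self.versions[version or self.current]

ring = Keyring()
v1 = ring.current              # data written now records key version 1
ring.rotate()                  # routine rotation
assert ring.current == 2
assert ring.key(v1) != ring.key()  # new material; old version still readable
```

Managed services such as Key Vault implement this versioning for you; the point of the sketch is that rotation is only safe when old versions are retained until all data under them is re-encrypted or destroyed.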

Protect Data at Rest

Encryption at rest is a common security requirement: it protects the confidentiality of data that resides in persistent storage on physical media, in any digital format. The media can include files on magnetic or optical media, archived data, storage objects, containers, and data backups.

Encryption at rest provides data protection for stored data (at rest). Attacks against data at-rest include attempts to obtain physical access to the hardware on which the data is stored, and then compromise the contained data.

Best Practices

  • Encrypt all data that needs to be stored at every stop in the transit process.
  • Verify that all data entering and leaving databases is encrypted and decrypted properly.
  • Ensure disk encryption is utilizing strong algorithms.
  • Use a symmetric encryption key to encrypt data as it is written to storage.
  • Partition data where appropriate, and use a different key for each partition.
  • Ensure that the data is never persisted in unencrypted form.
  • Keys must be stored in a secure location with identity-based access control and audit policies.
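The per-partition key practice above is often done by deriving each partition's key from a single master key, so only one secret needs secure storage while a compromised partition key does not expose the others. A sketch using an HMAC-based derivation (a real system would use a standard KDF such as HKDF; the partition names are illustrative):

```python
import hashlib
import hmac
import secrets

def partition_key(master_key: bytes, partition: str) -> bytes:
    """Derive a distinct data-encryption key per partition from one master
    key. HMAC-SHA-256 with a labeled input plays the role of a KDF here."""
    label = b"partition:" + partition.encode()
    return hmac.new(master_key, label, hashlib.sha256).digest()

master = secrets.token_bytes(32)
k_logs = partition_key(master, "logs")
k_billing = partition_key(master, "billing")

assert k_logs != k_billing                       # compromise is scoped
assert k_logs == partition_key(master, "logs")   # deterministic: no per-partition escrow needed
```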

Anti-Malware Monitoring

With so many processes and services hinging on computing technology, users must learn how to protect their devices and sensitive data against cybercriminals. There are several strategies that can be leveraged as part of a layered security approach to malware protection and prevention, including the following:

Best Practices

  • Ensure that all security updates and patches are installed.
  • Avoid suspicious links, emails, and websites.
  • Vet programs before downloading them.
  • Leverage strong, unique passwords.
  • Install industry-leading anti-virus software. One of the best protections against malware and other security risks is the use of an advanced anti-virus program. This software keeps a watchful eye over the system for any sign of infection and any symptoms associated with an attack. This way, users can be proactive in their malware protection, blocking and preventing infiltrations before they happen.

Network Monitoring

Network intrusion detection systems (NIDS) are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. A NIDS analyzes passing traffic on the entire subnet and matches it against a library of known attacks.

Best Practices

Azure Network Watcher

  • Integrate network security group (NSG) flow logs to gather information about IP traffic flowing through an NSG. Flow data is sent to Azure Storage accounts, from where you can access it as well as export it to any visualization tool, SIEM, or IDS of your choice.
  • Capture packets of data to and from VMs.
  • Patrol inbound and outbound traffic to a network interface in a VM with NSGs.
  • Enable Connection Monitor to provide unified end-to-end connection monitoring in Azure Network Watcher.
  • Configure Network Watcher as a NIDS by integrating Suricata, Logstash, Elasticsearch, and Kibana.

Identify Sensitive Data

Discover, classify, and label your sensitive data so that you can design the appropriate controls to ensure sensitive information is stored, processed, and transmitted securely by the organization's technology systems.

Data security strategies are being shaped primarily by:

  • Data sprawl: Sensitive data is being generated and stored on a nearly limitless variety of devices and cloud services where people creatively collaborate.
  • New model: The cloud enables new models of "phone home for key" to supplement and replace classic data loss protection (DLP) models that "catch it on the way out the door".
  • Regulations like General Data Protection Regulation (GDPR) are requiring organizations to closely track private data and how applications are using it.

Best Practices

  • Use data classification tools to classify the files in your cloud apps.
  • Ensure consistent access control is aligned to your enterprise segmentation strategy.
  • Generate a CSE data flow diagram to gain a broader and deeper understanding of the system and help identify sensitive data.

Identify Protection Mechanisms

When initiating cryptographic protections for information, the strongest algorithm and key size that is appropriate for providing the protection should be used, to minimize costly future transitions. Note, however, that selecting algorithms or key sizes that are unnecessarily large can have adverse performance effects.

Certain protective measures may be taken to minimize the likelihood or consequences of a key compromise. The following procedures are usually involved:

Best Practices

  • In general, a single key should be used for only one purpose, e.g., encryption, authentication, key wrapping, random number generation, or digital signatures.
  • Limit the amount of time a symmetric or private key is in plaintext form.
  • Prevent humans from viewing plaintext symmetric and private keys.
  • Restrict plaintext symmetric and private keys to physically protected containers. This includes key generators, key-transport devices, key loaders, cryptographic modules, and key-storage devices.
  • Use integrity checks to ensure that the integrity of a key or its association with other data has not been compromised.
  • Provide a cryptographic integrity check on the key (e.g., using a MAC or a digital signature).
  • Destroy keys as soon as they are no longer needed.
  • Store key information in a secure processor or TPM; this makes it significantly more difficult for a reverse engineer to extract your keys. If a reverse engineer gets their hands on your IoT device, they can hook it up to a lab bench and probe the system with differential power analysis and other methods to extract keying information, but keys held inside a trusted device make that job much harder.
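The integrity-check practices above can be sketched with an HMAC over the stored key material: any modification of the key is detected before it is used. A minimal example (the `protect`/`verify` names are illustrative; a production system would typically use a digital signature or a TPM-backed check):

```python
import hashlib
import hmac
import secrets

def protect(key_material: bytes, mac_key: bytes):
    """Attach a MAC to stored key material so tampering is detectable."""
    tag = hmac.new(mac_key, key_material, hashlib.sha256).digest()
    return key_material, tag

def verify(key_material: bytes, tag: bytes, mac_key: bytes) -> bool:
    expected = hmac.new(mac_key, key_material, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)  # constant-time comparison

mac_key = secrets.token_bytes(32)
stored, tag = protect(secrets.token_bytes(32), mac_key)
assert verify(stored, tag, mac_key)

# Flip one bit of the stored key: verification must fail.
tampered = bytes([stored[0] ^ 1]) + stored[1:]
assert not verify(tampered, tag, mac_key)
```

`hmac.compare_digest` is used rather than `==` so the comparison does not leak timing information.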

Log Monitoring

Log monitoring helps to speed up the process of identifying specific exceptions as they occur and the frequency at which they occur. Additionally, it provides developers and support personnel with a greater level of visibility into the systems and applications being monitored. Information is readily available in a way that's easy to consume. Discuss customer requirements and expectations to adopt an appropriate log leveling in your customer application.

Best Practices

  • Ensure Logging is enabled on all critical assets.
  • Assign criticality to logs and pay close attention to the ones that contain critical program information.
  • Craft rules and alerts that notify an administrator when a rule threshold has been tripped.
  • Inspect logs frequently to stay on top of potential danger.
  • Implement a continuous log monitoring solution to ensure system health and security.
  • Implement a Host based Intrusion Detection System (HIDS) solution. A HIDS monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected. It takes a snapshot of existing system files and matches it to the previous snapshot. If the critical system files were modified or deleted, an alert is sent to the administrator to investigate. An example of HIDS usage can be seen on mission critical machines, which are not expected to change their configurations.
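The rule-and-threshold practice above can be sketched as a simple counter over log severity levels. The log line format and rule shape here are assumptions for illustration; real solutions (SIEMs, Azure Monitor alert rules) express the same idea declaratively.

```python
from collections import Counter

def check_thresholds(log_lines, rules):
    """Return alert messages for every rule whose severity count crosses
    its threshold. `rules` maps a severity level to a maximum allowed
    count; log lines are assumed to start with the severity level."""
    counts = Counter(line.split(" ", 1)[0] for line in log_lines)
    return [f"ALERT: {level} count {counts[level]} exceeds threshold {limit}"
            for level, limit in rules.items() if counts[level] > limit]

logs = [
    "INFO service started",
    "ERROR db connection refused",
    "ERROR db connection refused",
    "ERROR db connection refused",
    "WARN slow query",
]
alerts = check_thresholds(logs, {"ERROR": 2, "WARN": 5})
print(alerts)  # the ERROR rule has been tripped; WARN has not
```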

Metrics

Azure Monitor Metrics is a feature of Azure Monitor that collects numeric data from monitored resources into a time series database. Metrics are numerical values that are collected at regular intervals and describe some aspect of a system at a particular time. Metrics in Azure Monitor are lightweight and capable of supporting near real-time scenarios, making them particularly useful for alerting and fast detection of issues. You can analyze them interactively with metrics explorer, be proactively notified with an alert when a value crosses a threshold, or visualize them in a workbook or dashboard.

Best Practices

Azure Monitor

  • Implement Azure Monitor Metrics to collect numeric data from monitored resources into a time series database.
  • Integrate metrics collection from Azure Resources and Application Insights.
  • Define custom metrics in your application that's monitored by Application Insights or create custom metrics for an Azure service using the custom metrics API.
  • Enable guest OS metrics for Windows virtual machines with Windows Diagnostic Extension (WAD) and for Linux virtual machines with InfluxData Telegraf Agent to collect VM metrics.
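The "alert when a value crosses a threshold" behavior described above reduces to detecting upward crossings in a regularly sampled series: an alert fires when the metric first rises above the limit, not on every sample that stays above it. A small sketch (sample values and the threshold are invented for illustration):

```python
def threshold_crossings(samples, limit):
    """Return the indices at which a regularly sampled metric first rises
    above `limit` -- the moments a near-real-time alert would fire."""
    crossings = []
    above = False
    for i, value in enumerate(samples):
        if value > limit and not above:
            crossings.append(i)   # fire only on the upward crossing
        above = value > limit
    return crossings

cpu_percent = [12, 18, 25, 91, 95, 40, 30, 88, 20]  # sampled every minute
print(threshold_crossings(cpu_percent, 80))  # alerts at samples 3 and 7
```

Tracking the previous state avoids re-alerting on every sample of a sustained spike, which is the usual behavior of metric alert rules.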

Integrate with SIEM systems


Best Practices

Below are some Azure SIEM technologies that we recommend implementing in your environment. Not all of the recommended technologies need to be adopted; choose only the solutions that work best for you and your customer.

Azure Sentinel

  • Adopt Azure Sentinel for a scalable, cloud-native, security information and event management (SIEM) and security orchestration automated response (SOAR) solution. Azure Sentinel provides intelligent security analytics and threat intelligence via alert detection, threat visibility, proactive hunting, and automated threat response.
  • Increase the speed and scalability of your SIEM solution by using a cloud-based SIEM.
  • Find the most serious security vulnerabilities so you can prioritize investigation. Use Azure secure score to see the recommendations resulting from the Azure policies and initiatives built into Azure Security Center.
  • Connect Azure Sentinel to data sources.
  • Continuously monitor with Azure Sentinel by using its integration with Azure Monitor Workbooks.
  • Implement notifications when something suspicious occurs by configuring Azure Sentinel.

Traces

Traces are a representation of logs; the data structure of a trace looks like that of an event log and generates end-to-end visibility within your code flow. A single trace can provide visibility into both the path traversed by a request and the structure of that request, and can track delays in the process. The path of a request allows software engineers and SREs to understand the different services involved, while the structure of a request helps one understand the junctures and effects of asynchrony in its execution. Note that tracing adds complexity, and with tracing enabled, overall system performance will degrade somewhat.

Best Practices

  • Enable distributed tracing across the services in an application; this is as simple as adding the proper agent, SDK, or library to each service, based on the language the service is implemented in.
  • Trace a starting event when beginning a method, which should represent a single logical operation or a pipeline, along with a string representation of the parameter values passed into the method.
  • Trace an informative event when inserting an item into the database.
  • Trace an informative event when taking one path or another in an important if/else statement.
  • Trace an error in a catch block depending on whether it is a recoverable error.
  • Trace a halted event when finishing the execution of the method.
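The event pattern in the bullets above (starting event with parameters, informative events, an error event in a catch block, a halted event at the end) can be sketched as a context manager. This is a toy in-memory tracer, not a real tracing SDK; production code would use a library such as OpenTelemetry.

```python
from contextlib import contextmanager

events = []  # stand-in for a tracing backend

@contextmanager
def trace(operation, **params):
    """Emit a starting event (with parameter values), an error event if an
    exception escapes, and a halted event when the operation finishes."""
    events.append(("start", operation, params))
    try:
        yield
    except Exception as exc:
        events.append(("error", operation, repr(exc)))
        raise
    finally:
        events.append(("halted", operation))

def insert_item(item):
    with trace("insert_item", item=item):
        events.append(("info", "inserting item into database"))

insert_item("order-42")
print([e[0] for e in events])  # ['start', 'info', 'halted']
```

The `finally` clause guarantees the halted event is traced even when the operation raises, mirroring the "trace a halted event when finishing the method" practice.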

Classify Sensitive Data

Data classification allows you to determine and assign value to your organization's data and provides a common starting point for governance. The data classification process categorizes data by sensitivity and business impact in order to identify risks. When data is classified, you can manage it in ways that protect sensitive or important data from theft or loss.

If no standard exists, you might want to use this sample classification to better understand your own digital estate and risk profile.

  • Non-business: Data from your personal life that doesn't belong to Microsoft.
  • Public: Business data that is freely available and approved for public consumption.
  • General: Business data that isn't meant for a public audience.
  • Confidential: Business data that can cause harm to Microsoft if overshared.
  • Highly confidential: Business data that would cause extensive harm to Microsoft if overshared.
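The sample tiers above form an ordered scale, which is what lets classification drive controls: a policy only needs to compare labels. A small sketch (the encryption policy shown is an invented example, not a standard):

```python
from enum import IntEnum

class Classification(IntEnum):
    """The sample tiers above, ordered by sensitivity so labels can be
    compared when deciding which controls apply."""
    NON_BUSINESS = 0
    PUBLIC = 1
    GENERAL = 2
    CONFIDENTIAL = 3
    HIGHLY_CONFIDENTIAL = 4

def requires_encryption(label: Classification) -> bool:
    # Example policy (an assumption for illustration): encrypt anything
    # not approved for public consumption.
    return label >= Classification.GENERAL

print(requires_encryption(Classification.PUBLIC))        # False
print(requires_encryption(Classification.CONFIDENTIAL))  # True
```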


Deploy Perimeter Networks for Security Zones

A perimeter network, also known as a demilitarized zone (DMZ), is a physical or logical network segment that provides an additional layer of security between your assets and the internet.

Perimeter networks are useful because you can focus your network access control management, monitoring, logging, and reporting on the devices at the edge of your virtual networks. A perimeter network is where you typically enable distributed denial of service (DDoS) prevention, intrusion detection/intrusion prevention systems (IDS/IPS), firewall rules and policies, web filtering, network antimalware, and more. The network security devices sit between the internet and your virtual network and have an interface on both networks.

Best Practices

It is strongly recommended to secure and configure the following perimeter network functionality:

  • Enable DDoS prevention.
  • Configure Intrusion detection system (IDS).
  • Configure Intrusion prevention system (IPS).
  • Ensure your DMZ has no way to directly access your LAN.
  • Have a 3-layer filter for your web server.
  • Ensure strong firewall rules are in place.

Create Strong Key Materials

The following recommendations are offered regarding the disposition of keying material. In computer cryptography, keys are integers. In some cases, keys are randomly generated using a random number generator (RNG) or pseudorandom number generator (PRNG), the latter being a computer algorithm that produces data which appears random under analysis. Some PRNG algorithms use system entropy to generate seed data; such seeds produce better results, since they make the initial conditions of the PRNG much more difficult for an attacker to guess. It is in the security engineer's best interest to ensure that strong algorithms, random number generators, and PUFs are used when generating key material.

Best Practices

  • Create Physical Unclonable Function (PUF)-based keys. Keying information holds all the secrets that we do not want to leak to any adversary, so it is strongly recommended to keep key information as complex as possible to prevent any kind of crackable pattern. A PUF-based key is especially valuable because it is created with software logic along with a hardware component, making it almost impossible to recreate.
  • Utilize a non-deterministic random bit generator to create the strong random numbers used for key material.
  • Ensure each encryption operation uses an Initialization Vector (IV) associated with the information it helps to protect; the IV is needed until the information and its protection are no longer needed.
  • Shared secrets generated during the execution of key-agreement schemes shall be destroyed as soon as they are no longer needed to derive keying material.
  • Random number seeds shall be destroyed immediately after use.
  • Ensure key metadata is destroyed after use.
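In application code, the "non-deterministic random bit generator" practice above usually means drawing key material from the operating system's CSPRNG rather than a deterministic PRNG such as Python's `random` module. A minimal sketch:

```python
import secrets

def generate_key(bits: int = 256) -> bytes:
    """Draw key material from the OS CSPRNG (an entropy-seeded,
    non-deterministic source) -- never from `random`, which is a
    predictable Mersenne Twister PRNG."""
    return secrets.token_bytes(bits // 8)

k1, k2 = generate_key(), generate_key()
assert len(k1) == 32   # 256-bit key
assert k1 != k2        # independent draws; collisions are astronomically unlikely
```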

Anti-Malware Monitoring


Best Practices

Microsoft Anti-Malware

  • Configure Microsoft Antimalware for Cloud Services and Virtual Machines
  • Enable Real-time protection to monitor activity in Cloud Services and on Virtual Machines to detect and block malware execution.
  • Schedule periodic scans to detect malware, including actively running programs.
  • Enable Antimalware monitoring for your Cloud Service or Virtual Machine so that Antimalware event log entries are written to your Azure storage account as they are produced.
  • Enable Antimalware in VMs.

Network Monitoring

To ensure healthy network traffic for both user and application activity, utilize management and monitoring tools to monitor your network's security. Most cloud monitoring tools can monitor, diagnose, view metrics, and enable or disable logs for resources in a cloud virtual network.

Best Practices

  • Establish a network baseline to set alerts.
  • Define critical assets to monitor.
  • Use tools that provide end-to-end network visibility.
  • Implement proper configuration management.
  • Set up an alert escalation matrix.
  • Enable auto-remediation of network issues.

Use Strong Network Controls

Network access control is a method of enhancing the security of a private organizational network by restricting the availability of network resources to endpoint devices that comply with the organization's security policy. This section provides guidelines to harden these controls from bad actors.

Best Practices

  • Enable Security Benchmarks that focus on cloud-centric control areas. These controls are consistent with well-known security benchmarks described by the Center for Internet Security (CIS) Controls Version 7.1.
  • Set up a firewall to keep unauthorized traffic out.
  • Ensure subnet provisioning is configured.
  • Establish a network baseline to set alerts.
  • Implement proper and secure configuration management.
  • Maintain and securely store images.
  • Create server-access rules and enforce them to limit the number of individuals who can create a security threat.
  • Implement automated configuration management systems.
  • Continuously monitor your network so that you can identify security issues before they become threats.
  • Install an anti-malware software to keep viruses, ransomware, trojans, worms, adware, spyware and more out. Install patches as soon as they become available so that your defenses stay strong.

Logically Segment Subnets

Segment the larger address space into subnets using Classless Inter-Domain Routing (CIDR) based subnetting principles. Routing between subnets happens automatically, and you do not need to manually configure routing tables. To create network access controls between subnets, you'll need to put network security groups (NSGs) between the subnets.

Best Practices

  • Do not assign allow rules with broad ranges (allow 0.0.0.0 through 255.255.255.255).
  • Confirm access permissions and routing between subnets.
  • Use CIDR-based subnetting principles to create your subnets.
  • When you use network security groups for network access control between subnets, ensure you put resources that belong to the same security zone or role in their own subnets.
  • Define subnets broadly to ensure that you have flexibility for growth.
  • Define an Application Security Group for lists of IP addresses that you think might change in the future or be used across many network security groups. Be sure to name Application Security Groups clearly so others can understand their content and purpose.
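The CIDR subnetting practice above can be demonstrated with Python's `ipaddress` module: carving a /16 address space into /24 subnets, one per security zone. The zone names are illustrative.

```python
import ipaddress

# Carve a virtual network's /16 address space into /24 subnets using
# CIDR-based subnetting; assign one subnet per security zone/role.
vnet = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vnet.subnets(new_prefix=24))

zones = {"web": subnets[0], "app": subnets[1], "data": subnets[2]}
print(zones["web"])   # 10.0.0.0/24
print(len(subnets))   # 256 available /24 subnets -- headroom for growth

# Subnets carved this way never overlap, so an NSG rule scoped to one
# zone's prefix cannot accidentally cover another zone.
assert not zones["web"].overlaps(zones["data"])
```

Defining subnets broadly, as the list above recommends, simply means leaving many of those 256 /24 blocks unassigned for future zones.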

Log Monitoring

Azure provides a wide array of configurable security auditing and logging options to help you identify gaps in your security policies and mechanisms. This article discusses generating, collecting, and analyzing security logs from services hosted on Azure. Discuss customer requirements and expectations to adopt an appropriate log leveling in your customer application.

Best Practices

Depending on the logs you are interested in, a different solution will be adopted into your infrastructure. Let's take a closer look at the types of logs you should expect to see with Azure Logs:

  • Control/management logs provide information about Azure Resource Manager CREATE, UPDATE, and DELETE operations. For more information, see Azure activity logs.
  • Data plane logs provide information about events raised as part of Azure resource usage. Examples of this type of log are the Windows event system, security, and application logs in a virtual machine (VM) and the diagnostics logs that are configured through Azure Monitor.
  • Processed events provide information about analyzed events/alerts that have been processed on your behalf. Examples of this type are Azure Security Center alerts where Azure Security Center has processed and analyzed your subscription and provides concise security alerts.

Azure Monitor

  • Ingest activity logs to provide insight into the operations that were performed on resources in your subscription.
  • Ingest Azure resource logs to provide insight into operations that your resource itself performed.
  • Enable Application Insights to monitor your live applications. It will automatically detect performance anomalies and includes powerful analytics tools to help you diagnose issues and understand what users do with your app.
  • Integrate Security Center alerts to synchronize Security Center alerts, virtual machine security events collected by Azure diagnostics logs, and Azure audit logs with your Azure Monitor logs.

Windows Event Log

  • Gather logs from virtual machines and cloud services to capture system and logging data on the virtual machines, and transfer that data into a storage account of your choice.

Securing Virtual Networks

Virtual network security can be a crucial element of software-defined networking (SDN). Virtualization of networks can deliver flexibility and efficiencies not present in logical networks based on hardware architectures.

Use virtual network service endpoints to extend your virtual network private address space, and the identity of your virtual network to the cloud services, over a direct connection. Endpoints allow you to secure your critical cloud service resources to only your virtual networks.

Best Practices

  • Configure optimal routing for cloud service traffic from your virtual network: Any routes in your virtual network that force internet traffic to your on-premises and/or virtual appliances, known as forced tunneling, also force cloud service traffic to take the same route as the internet traffic. Service endpoints provide optimal routing for cloud traffic.
  • Lock down virtual identity and access control.
  • Secure service resources within a virtual network to provide improved security by fully removing public internet access to resources and allowing traffic only from your virtual network.
  • Ensure optimal routing in your network, which allows you to continue auditing and monitoring outbound internet traffic from your virtual networks, through forced tunneling, without affecting service traffic.

Separate application secrets by environment

Isolating environment secrets helps to minimize the blast radius of a potential compromise.

Best Practices

  • Create and use different secrets for different environments
  • When using Key Vault, have different identities to manage different sets of keys
  • Ensure as much separation exists between the environments as possible

Use Virtual Network Appliances

A network virtual appliance is a VM that performs a network function: a security system, WAN optimization, or other network function.

Network security groups and user-defined routing can provide measures of network security at the network and transport layers of the OSI model. But in some situations, you want or need to enable security at high levels of the stack.

Best Practices

  • Enable two-factor authentication to secure remote access to the network and its data.
  • Configure network appliances to enable high-availability and security to the infrastructure: DNS, DHCP, and IP address management.
  • Enable firewall protection to establish a barrier between the network and internet.
  • Lock down network appliance settings for wireless network access points.
  • Integrate intrusion detection systems to monitor activity and policy violations.

Network Monitoring

Network intrusion detection systems (NIDS) are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. A NIDS analyzes passing traffic on the entire subnet and matches that traffic against a library of known attacks.

Best Practices

  • Configure inbound and outbound traffic on critical assets.
  • Configure subnets properly.
  • Adopt a network security monitoring solution. The benefit is knowing in real time what is happening on the network at a detailed level, and strengthening security by hardening the processes, devices, appliances, software policies, etc.
  • Deploy the NIDS behind the firewall to filter out everything that is not meant to enter the LAN.
  • Capture VLANs separately. Since VLANs are separate TCP broadcast domains, separately gathering and analyzing data for each can help detect internal and external security problems quickly.
  • Monitor all layers; don't leave anything to chance. Usually, the data link layer is omitted from monitoring; however, since a new wave of attacks can exploit Ethernet frames too, it is important to take this layer into account. The same applies to the network layer, as most internal attacks can easily use it to exploit vulnerabilities.

Architect Key States and Key Transitions

During the lifetime of cryptographic information, the information is either "in transit" (e.g., is in the process of being manually distributed or distributed using automated protocols to the authorized communications participants for use by those entities) or is "at rest" (e.g., the information is in storage). A key is used differently, depending upon its state in the key's lifecycle. Key states are defined from a system point-of-view, as opposed to the point-of-view of a single cryptographic module.

Best Practices

  • Design key protocols carefully so that they do not give secret information to a potential attacker. For example, a protocol that indicates abnormal conditions, such as a data integrity error, may permit an attacker to confirm or reject an assumption regarding secret data.
  • Ensure each key state is accounted for: pre-activation, active, deactivated, destroyed, and compromised.
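The key states above can be sketched as a small state machine. The allowed transitions below are an illustrative reading of the NIST SP 800-57 lifecycle, not a definitive one:

```python
from enum import Enum

class KeyState(Enum):
    PRE_ACTIVATION = "pre-activation"
    ACTIVE = "active"
    DEACTIVATED = "deactivated"
    DESTROYED = "destroyed"
    COMPROMISED = "compromised"

# Illustrative transition table; adjust to your key management plan.
ALLOWED = {
    KeyState.PRE_ACTIVATION: {KeyState.ACTIVE, KeyState.DESTROYED, KeyState.COMPROMISED},
    KeyState.ACTIVE: {KeyState.DEACTIVATED, KeyState.COMPROMISED},
    KeyState.DEACTIVATED: {KeyState.DESTROYED, KeyState.COMPROMISED},
    KeyState.COMPROMISED: {KeyState.DESTROYED},
    KeyState.DESTROYED: set(),  # terminal state
}

def transition(current: KeyState, new: KeyState) -> KeyState:
    """Move a key to a new state, rejecting illegal lifecycle jumps."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current.value} -> {new.value}")
    return new
```

Modeling transitions explicitly makes it impossible for key-handling code to, say, reactivate a destroyed key by accident.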

Preparing for an Audit

An audit is a technical assessment of an organization's infrastructure, operating systems, applications, and more. Risk management audits force us to be vulnerable, exposing all our systems and strategies. They help us stay ahead of insider threats, security breaches, and other cyberattacks that put our company's security, reputation, and finances on the line. The following best practices explain how to prepare for an upcoming audit.

Best Practices

  • Enable Diagnostic Settings on Azure resources for access to audit, security, and diagnostic logs. Activity logs, which are automatically available, include event source, date, user, timestamp, source addresses, destination addresses, and other useful elements.

Rotate secrets regularly

Good practice (and many compliance/regulatory programs) recommends that you have an operational strategy for rotating keys over the life of the application.

Best Practices

  • Pick a reasonable time for rotation. Consider key strength and what is being protected.
  • Rotate secrets in such a way that a highly available infrastructure is leveraged to reduce downtime where possible
  • Rotate keys/certificates with the same parameters and strength as what is being replaced
  • Don't create secrets with empty expiration dates
  • When re-encryption of data is necessary, use a longer period between rotations if possible, to reduce resource usage
  • One option is to leverage Data Encryption Key/Key Encryption Key schemes as described in this document, which fit well into the Key Vault model.
  • When an application is turned over from CSE to the customer at the end of the engagement, all secrets should be rotated to prevent CSE engineers from retaining access to resources

Protect Data in Transit

Data in transit should be protected against 'out of band' attacks (e.g., traffic capture) using encryption to ensure that attackers cannot easily read or modify the data. When data is being transferred between components, locations, or programs, it's in transit. Examples are transfer over the network or across a service bus (from on-premises to cloud and vice versa, including hybrid connections).

Best Practices

  • Use Transport Layer Security (TLS) protocol, version 1.2 or greater, to protect data when it's traveling between the cloud services and customers.
  • For data moving between your on-premises infrastructure, consider appropriate safeguards such as HTTPS or VPN.
  • Enable data-link layer encryption using the IEEE 802.1AE MAC Security standard (MACsec), applied point-to-point across the underlying network hardware.
  • Configure Perfect Forward Secrecy (PFS) to protect connections between customers' client systems and cloud services with unique keys.
  • Move larger data sets over a dedicated high-speed WAN link.
  • For services that implement an Internet-facing SMTP listener, the SMTP MTA Strict Transport Security (https://datatracker.ietf.org/doc/rfc8461/) protocol must be enabled to mitigate the risk of encryption downgrades against STARTTLS.
  • Untrusted data must always be deserialized with known-good deserializers that constrain deserializable object types to those in a list of expected types.
  • Require explicit integrity checks: adding a cryptographic integrity check (such as an HMAC or digital signature) to the payload, and verifying it before deserializing/decrypting the data, prevents tampering.
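The integrity-check bullet above can be sketched with Python's standard hmac module. The key and payload here are illustrative; in practice the key would come from a vault:

```python
import hashlib
import hmac
import json

KEY = b"shared-integrity-key"  # illustrative; fetch from a vault in practice

def protect(payload: dict) -> tuple:
    """Serialize a payload and attach an HMAC-SHA256 tag."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return body, tag

def verify_then_load(body: bytes, tag: str) -> dict:
    """Check the tag with a constant-time compare BEFORE deserializing."""
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("integrity check failed; payload rejected")
    return json.loads(body)
```

The constant-time `hmac.compare_digest` avoids leaking tag bytes through timing, and deserialization never runs on unverified input.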

Penetration Testing

Validating security defenses is as important as testing any other functionality. Make penetration testing a standard part of your build and deployment process. Schedule regular security tests and vulnerability scanning on deployed applications, and monitor for open ports, endpoints, and attacks. Pen testing can help with compliance by validating existing security controls or defenses. In fact, regulatory standards often prescribe the use of specific technical tools, such as firewalls and antivirus, as well as measures for the physical and digital protection of data.

Fuzz testing is a method for finding program failures by supplying malformed input data to program interfaces that parse and consume this data. It is also a fantastic way to provide additional testing of interfaces and function inputs.

Best Practices

  • Prepare a security testing plan.
  • Prepare a testing environment.
  • Gather the best tools for the test.
  • Build an attack arsenal.
  • Prepare test cases: Application mapping.
  • Attacking the client: Binary and file analysis to find openings.
  • Network attacks: capture traffic, replay traffic, and analyze findings.
  • Staging server attacks.

Choose Strong Encryption Algorithms

The topic of which encryption algorithm to use boils down to a simple question: how much time can you sacrifice for security? We all know the boot process can take a long time, and certain encryption algorithms can almost double that time. The National Institute of Standards and Technology (NIST) creates standards for encryption and selects the strongest algorithms in use worldwide; by reading their encryption standards, you and your team can easily select a safe encryption algorithm that works best for your solution.

Best Practices

  • We recommend adopting Elliptic Curve Cryptography (ECC), a well-studied approach that provides strong security with comparatively small keys.
  • Assess the data that needs to be encrypted by identifying the critical program information; with all the data in front of you, you can determine which algorithms to use depending on criticality.
  • Ensure you are using one of the approved cryptographic algorithms: hash functions, symmetric key algorithms, and asymmetric key algorithms.
  • Assess the encryption tools provided to you from the vendor and determine if they are strong enough, otherwise create tools to utilize outside crypto engines.
  • Never roll your own crypto.
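As a minimal illustration of preferring approved algorithms, a hashing helper built on SHA-256 (an approved hash function) rather than broken choices like MD5 or SHA-1:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Digest data with SHA-256, a NIST-approved hash function.

    MD5 and SHA-1 are unsuitable for new designs; delegating to a
    vetted library like hashlib is the opposite of rolling your own.
    """
    return hashlib.sha256(data).hexdigest()
```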

Identify Critical Program Information

Critical Program Information (CPI) analysis is how programs identify, protect, and monitor CPI. This analysis should be conducted early and throughout the life cycle of the program. Additionally, because CPI is critical to U.S. technological superiority, its value extends beyond any one program. As a result, CPI analysis should consider the broader impact of CPI identification and protection on national security.

The sensitivity of the information that will need to be protected by the system for the lifetime of the new algorithm(s) should be evaluated to determine the minimum security requirement for the system. Care should be taken not to underestimate the lifetime of the system or the sensitivity of the information that it may need to protect. In general, encryption is computationally expensive, so it is in an architect's best interest to identify all the critical program data that must be encrypted to safeguard it, while ensuring a high-performance state can still be achieved.

Best Practices

  • Identify critical information that needs to be safeguarded from bad actors.
  • Generate data flow diagrams of this critical data to gain a holistic understanding of data travel throughout your system.
  • Assign criticality to all the data in a system.

Preparing for an Audit

An audit is a technical assessment of an organization's infrastructure, operating systems, applications, and more. Risk management audits force us to be vulnerable, exposing all our systems and strategies. They help us stay ahead of insider threats, security breaches, and other cyberattacks that put our company's security, reputation, and finances on the line. The following best practices explain how to prepare for an upcoming audit.

Best Practices

  • Record all audit details, including who's performing the audit and what network is being audited, so you have these details on hand.
  • Document all current security policies and procedures for easy access.
  • Evaluate activity logs to determine if all IT staff have performed the necessary safety policies and procedures.
  • Analyze your security patches to ensure everything is up to date.
  • Conduct a self-test on your existing software to identify any vulnerabilities.
  • Search for any holes within your existing firewall.
  • Double-check exactly who has access to sensitive data and where said data is stored within your network.
  • Conduct a scan to identify every network access point.
  • Regularly review event logs to keep human error at a minimum.

Restrict Access to Sensitive Data

Data protection recommendations focus on addressing issues related to encryption, access control lists, identity-based access control, and audit logging for data access.

The implementation of an efficient identity and access management system can help you limit access to critical data. Plus, it'll help you understand who exactly accesses and works with your business's critical data. A high-level granularity of access management allows granting elevated privileges only to users that actually need it. Also, you can use cloud data encryption both in transit and at rest for additional protection of sensitive information.

Best Practices

  • Protect sensitive data by restricting access using role-based access control (RBAC), network-based access controls, and specific controls in services.
  • Segregate data to provide the scale and economic benefits of multi-tenant services while rigorously preventing customers from accessing one another's data.
  • Grant access to users, groups, and applications at a specific scope.
  • Use a secure management workstation to protect sensitive accounts, tasks, and data.
  • Enforce security policies across all devices that are used to consume data, regardless of the data location (cloud or on-premises).
  • Use host-based data loss prevention to enforce access control.
  • Implement Privileged Access Management for the most sensitive data.
  • Ensure end-to-end visibility into the infrastructure, critical data, and applications.

Traces

Traces are a representation of logs; the data structure of traces looks like that of an event log. A single trace can provide visibility into both the path traversed by a request as well as the structure of a request. The path of a request allows software engineers and SREs to understand the different services involved in the path of a request, and the structure of a request helps one understand the junctures and effects of asynchrony in the execution of a request.

When a problem does occur, tracing allows you to see how you got there: which function was called, the function's duration, the parameters passed, and how deep into the function the user could get. A common tracing tool is the Profiling API in .NET.

Best Practices

Application Insights

  • Implement diagnostic tracing logs for your ASP.NET/ASP.NET Core application from ILogger, NLog, log4Net, or System Diagnostics Trace to Azure Application Insights.
  • Configure Application Insights to integrate ILogger traces.
  • Configure System Diagnostics Tracing Event Source events to be sent to Application Insights as traces.
  • Configure System Diagnostics Diagnostic Source events to be sent to Application Insights as traces.
  • Configure Event Tracing for Windows (ETW) events to be sent to Application Insights as traces.

Azure Monitor

  • Implement AzureLogHandler for Python applications to send diagnostic tracing logs in OpenCensus Python for Azure Monitor

Define data classifications and map those classifications to access control requirements

Having a well-understood set of data classifications is important to understand how to implement the levels of protection with your RBAC (or other control) system.

Best Practices

  • Early in the engagement, brainstorm what data classification levels will be used by your application.
  • For each of those levels, decide what level of protection is appropriate for that data; key strength, time to respond to revocation, and period of validity are all important considerations
  • Higher sensitivity data may require a stronger (larger) encryption key than lower sensitivity data
  • More information on managing data classification can be found in the Data Security and Encryption Best Practices section of Azure Docs
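One way to express the mapping is a small table from classification to minimum key strength. The levels and sizes below are illustrative assumptions; align them with your own regulatory requirements:

```python
# Minimum symmetric key sizes (bits) per data classification.
# Illustrative values only -- tune to your compliance obligations.
MIN_KEY_BITS = {
    "public": 128,
    "internal": 128,
    "confidential": 256,
    "restricted": 256,
}

def required_key_bits(classification: str) -> int:
    """Look up the minimum key strength for a classification level."""
    try:
        return MIN_KEY_BITS[classification]
    except KeyError:
        # Fail safe: unknown classifications get the strongest protection.
        return max(MIN_KEY_BITS.values())
```

Failing safe on unknown labels means a misclassified asset is over-protected rather than under-protected.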

Adopt a Zero Trust Approach

Zero Trust drives organizations to take the "assume breach" mindset, but this approach should not be limiting. If you adopt a Zero Trust network, you protect corporate data while ensuring that organizations can build a modern workplace using technologies that empower employees to be productive anytime, anywhere, in any way.

Best Practices

  • Identify the data and assets that are valuable to the organization: customer database, source code, smart building management systems.
  • Classify the level of sensitivity of each asset: "highly restricted", "restricted", "confidential", "internal", "public".
  • Group assets with similar functionalities and sensitivity levels into the same micro-segment.
  • Deploy a segmentation gateway, whether virtual or physical, to achieve control over each segment.
  • Define an appropriate access policy for cloud application users, developers, and admins.

NEVER store application secrets in code or configuration files

Remember that once things are checked into public source control, you can never claw them back; they must be rotated or discarded. Keeping secrets out of even commit histories is critical to keeping software secure.

Best Practices

  • You can use tools like CredScan, TruffleHog, or GitRob to scan your repositories for secrets.
  • Keep application secrets in a secrets vault such as Azure Key Vault
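A toy scanner in the spirit of CredScan or TruffleHog. The two patterns below are illustrative; real tools ship far larger and more carefully tuned rule sets:

```python
import re

# Illustrative detection rules: a hardcoded password assignment and a
# string shaped like an AWS access key id. Real scanners cover hundreds
# of credential formats plus entropy-based detection.
SECRET_PATTERNS = [
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_secrets(text: str) -> list:
    """Return every substring that matches a known secret pattern."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Running a check like this in CI fails the build before a secret ever reaches the commit history, where it could never be clawed back.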

Create a Key Management Plan

Encryption Key Management is administering the full lifecycle of cryptographic keys. The proper management of cryptographic keys is essential to the effective use of cryptography for security. Keys are analogous to the combination of a safe. If a safe combination is known to an adversary, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms. Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of mechanisms and protocols associated with keys, and the protection afforded to the keys. All keys need to be protected against modification, and secret and private keys need to be protected against unauthorized disclosure. Key management provides the foundation for the secure generation, storage, distribution, use, and destruction of keys. This section is all about Public Key Infrastructure. As stated in the previous section, a secure boot process should be architected to address the general key-management questions.

Best Practices

Create a key management plan and have discussions with your hardware, software, and firmware teams to ensure the base plan can work for everyone alongside securing the IoT device as a whole. Include the following topics in your key management plan:

  • Define the security services that may be provided and key types that may be employed in using cryptographic mechanisms.
  • Provide background information regarding the cryptographic algorithms that use cryptographic keying material.
  • Classify the different types of keys and other cryptographic information according to their functions, specify the protection that each type of information requires, and identify methods for providing this protection.
  • Identify the states in which a cryptographic key may exist during its lifetime.
  • Identify the multitude of functions involved in key management.
  • Discuss a variety of key management issues related to the keying material. Topics discussed include key usage, cryptoperiod length, domain-parameter validation, public key validation, accountability, audit, key management system survivability, and guidance for cryptographic algorithm and key size selection.

Monitor Sensitive Data

Monitoring and managing access to sensitive information in the cloud is an essential part of cybersecurity policy in any organization. You need to have a clear visibility of what information is stored where and who can have access to it. Being able to track every action involving critical data is also important. Look for a complex user activity monitoring solution that allows setting specific rules and restrictions, personalizing access to shared accounts, and continuously tracking any activity involving your company's sensitive information.

Best Practices

  • Continuously monitor for unauthorized transfer of data to locations outside of enterprise visibility and control.
  • Alert on anomalous transfer of information that might indicate unauthorized transfers of sensitive information.
  • Implement a HIDs solution to ensure disk data integrity.
  • Grant the least privilege that's required to complete the task; audit and log access requests.
  • Use monitoring tools to expose suspicious activity and unauthorized attempts to access data, and flag them.
  • Do regular scans to ensure that no plaintext data is on your systems.

Advanced Threat Protection

Advanced threat protection (ATP) refers to a category of security solutions that defend against sophisticated malware or hacking-based attacks targeting sensitive data. Advanced threat protection solutions can be available as software or as managed services. ATP solutions can differ in approaches and components, but most include some combination of endpoint agents, network devices, email gateways, malware protection systems, and a centralized management console to correlate alerts and manage defenses. A popular component of advanced threat protection is an intrusion detection system (IDS): a device or software application that monitors a network or systems for malicious activity or policy violations. The primary benefit offered by advanced threat protection software is the ability to prevent, detect, and respond to new and sophisticated attacks that are designed to circumvent traditional security solutions such as antivirus, firewalls, and IPS/IDS. Attacks continue to become increasingly targeted, stealthy, and persistent, and ATP solutions take a proactive approach to security by identifying and eliminating advanced threats before data is compromised.

Best Practices

  • Adopt advanced threat protection measures in your applications.
  • Adopt an Anti-Virus Solution for your system.
  • Adopt an IDS solution for your system.
  • Ensure Firewalls are properly hardened.
  • Turn on critical logs to be monitored.

Metrics

Metrics are a numeric representation of data measured over intervals of time. Metrics can harness the power of mathematical modeling and prediction to derive knowledge of the behavior of a system over intervals of time in the present and future. The biggest advantage of metrics-based monitoring over logs is that unlike log generation and storage, metrics transfer and storage have a constant overhead. Unlike logs, the cost of metrics doesn't increase in lockstep with user traffic or any other system activity that could result in a sharp uptick in data.

Best Practices

  • Collect cyber security metrics which measure value that demonstrates how well a company is achieving its cybersecurity risk reduction goals.
  • Collect metrics on critical systems to help you better understand what is working, which assets are most at risk, and which areas are worsening.
  • Collect meaningful metrics across the system to provide quantitative information that you can use to show management, board members, customers, and even shareholders that you take confidentiality, integrity, and availability seriously.

DDoS Protection

For enterprises confronted with massive distributed denial-of-service (DDoS) attacks, finding solutions that offer DDoS protection is critical to protecting revenue, productivity, reputation, and user loyalty. Hackers stage DDoS attacks by hijacking unprotected computers and installing malware.

Best Practices

  • Ensure your network has plenty of bandwidth.
  • Build redundancy into your infrastructure.
  • Configure your network hardware against DDoS attacks.
  • Deploy anti-DDoS hardware and software modules.
  • Deploy a DDoS protection appliance.
  • Protect your DNS servers.
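One building block inside many anti-DDoS modules is per-client rate limiting. A toy token-bucket sketch, with time injected as a parameter to keep it deterministic:

```python
class TokenBucket:
    """Toy token-bucket rate limiter.

    Each client gets `burst` tokens; tokens refill at `rate_per_sec`.
    Requests without a token are dropped, capping the load any one
    source can impose. Real DDoS appliances layer far more on top.
    """

    def __init__(self, rate_per_sec: float, burst: float):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill proportionally to elapsed time, capped at burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```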

Define Key Rotation & Expiration

Limiting the number of messages encrypted with the same key version helps prevent brute-force attacks enabled by cryptanalysis. Key lifetime recommendations depend on the key's algorithm, as well as either the number of messages produced, or the total number of bytes encrypted with the same key version. For example, the recommended key lifetime for symmetric keys in Galois/Counter Mode (GCM) is based on the number of messages encrypted, as noted by NIST. If a key is compromised, regular rotation limits the number of actual messages vulnerable to compromise.

Best Practices

  • Implement key rotation/rolling into your architecture to prevent stale and predictable keys with your most critical data. A key-rolling algorithm can complicate a reverse engineer's life by ensuring the keys are always changing; this, along with PUF-based keys, will lock down any leaked data.
  • Remove any static and developer keys before release.
  • All keys required to authenticate to external services should expire on a regular cadence so that any leaked credentials are not valid indefinitely.
  • Design in automated credential rotation from the beginning of a project so that the delivered solution can recover from a security breach quickly.
  • Create integration checks to see if the key is compromised; if so, disable it and revoke access to it as soon as possible.
  • We recommend that you rotate keys automatically on a regular schedule.
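A minimal sketch of an age-based rotation check. The 90-day cadence is an illustrative assumption; tune it to the key's algorithm and what it protects:

```python
from datetime import datetime, timedelta, timezone

# Illustrative cadence -- adjust per key type, algorithm, and data volume.
MAX_KEY_AGE = timedelta(days=90)

def needs_rotation(created: datetime, now: datetime) -> bool:
    """True when a key version is older than the rotation cadence.

    A scheduled job could call this per key version and trigger
    automated rotation for any that return True.
    """
    return now - created >= MAX_KEY_AGE
```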

Choose key strengths based on sensitivity of data

Not all things require a high degree of security; algorithmic complexity and key size are both meaningful tradeoffs of higher key strengths.

Best Practices

  • Understand the sensitivity of each type of data and how that maps to any cryptographic requirements (including regulatory, if they exist)
  • Key strength and cryptographic algorithm selection go together; ensure you understand how they influence each other. The document from OWASP (p.52) has excellent recommendations about how to balance these two things

Maximize Uptime and Performance

If a service is down, information can't be accessed. If performance is so poor that the data is unusable, you can consider the data to be inaccessible. From a security perspective, you need to do whatever you can to make sure that your services have optimal uptime and performance.

A popular and effective method for enhancing availability and performance is load balancing. Load balancing is a method of distributing network traffic across servers that are part of a service.

Best Practices

  • Deploy your technology across multiple locations to prevent downtime.
  • Manage traffic in real-time by deploying global load balancing using real-time performance data so that traffic will flow to the best performing data center.
  • Run routine maintenance checks to catch performance and security issues before they become disasters.
  • Invest in a server monitoring tool to keep you aware of outages and other incidents on both internal and remote servers, so that you can resolve them more quickly.
  • Being organized can go a long way in improving uptime and performance.
  • Evaluate changes before they're implemented.
  • To achieve zero downtime, use a virtual hosting platform. Virtualization protects against downtime because, with a virtual copy of your data in the cloud, you'll always be able to access it even if your server goes down.
  • Conduct regular internal audits so that you can identify security issues before they become threats.
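A minimal round-robin load balancer sketch (backend names are illustrative); real balancers add health checks and weighted or performance-based routing on top:

```python
import itertools

class RoundRobinBalancer:
    """Spread requests evenly across a pool of backends.

    Round robin is the simplest load-balancing strategy; it improves
    uptime because no single backend absorbs all the traffic.
    """

    def __init__(self, backends: list):
        if not backends:
            raise ValueError("need at least one backend")
        self._cycle = itertools.cycle(backends)

    def next_backend(self) -> str:
        """Return the backend that should serve the next request."""
        return next(self._cycle)
```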

Organize access to resources to minimize impact of compromise

Identify the least amount of access required for each application/role to complete its actions. If this role is compromised, an attacker is limited in the scope of their subsequent attacks.

Best Practices

  • Access to secrets should be segregated by at least application, if not role
  • Only grant the identity access to the resources it manages/uses
  • Where appropriate, use virtual networks or other network segregation approach
  • Goal is to reduce the ability for an attacker to use a single vulnerability as a foothold to compromise other parts of the system
  • More information on lowering privileged account exposure can be found in Identity Management best practices
