Frequently Asked Questions
-
Q1. What is Adversity Discovery and Assessment?
Organizations need to identify the existence and location of critical assets so that each asset is monitored and protected according to its business risk rating. Discovering assets enables organizations to establish an inventory, which can then be used to assess and mitigate the associated risks. An asset inventory also allows organizations to configure scans that probe for common weaknesses in a platform or application. Underscore Adversity Discovery & Assessment (ADA) enables organizations to discover every asset with 100% visibility and rich context to track behaviour, detect risks, and prioritize action to help protect critical infrastructure and assets. ADA enables security teams to quickly identify and remediate the vulnerabilities most likely to be exploited and to negatively impact the business.
-
Q2. How do you find all devices in your network?
Finding all devices in your network can be challenging because hundreds, if not thousands, of devices are connected to enterprise networks at any given time. These can be end-user devices, such as laptops and phones, or network-capable Internet of Things (IoT) assets, such as smart TVs, printers, and security cameras. While end-user devices such as computers support traditional cybersecurity agents, many operational technology (OT) and IoT assets are left unprotected and unmanaged, making it difficult to quickly find all devices on the network.
-
Q3. How does Adversity Discovery work?
Adversity Discovery works by using in-depth automated OSINT techniques to search the internet and identify and map the assets that comprise your organization's digital perimeter. Assets can be domains, IPs, and FQDNs. Managing asset discovery manually is a complex and time-consuming task and can leave unknown assets unidentified, increasing risk. Because an organization's digital perimeter is constantly evolving, asset discovery needs to be conducted regularly to maintain an accurate inventory. It is critical to have tools in place that can alert you not only to new services but also to changes in known assets that could pose a risk. IT asset discovery tools automate the identification and cataloguing of an organization's digital assets, gathering information through a combination of network discovery methods (agent-based and agentless). These tools are crucial for modern enterprise cybersecurity, which is marked by an expanding attack surface driven by the proliferation of cloud computing, bring-your-own-device (BYOD) policies, interconnected systems, and air-gapped environments.
-
Q4. Why should I use Adversity Discovery?
This form of passive, automated inventory management offers a wealth of benefits as a more efficient way to take control of your security posture. Below are some of the main advantages of continuously mapping assets, outlining its importance in today's cyber climate:
- Gain real-time attack surface visibility of your digital environment
- Take control of your network
- Streamline and simplify asset management
- Be proactive with always-on automated security
- Harness scalability with asset monitoring
- Ensure compliance with data security regulations
-
Q5. Why do enterprises need IT asset discovery?
Enterprises need IT asset discovery as part of the larger IT asset management (ITAM) process, which aims to manage and optimize all assets across the enterprise. Since you can only secure and optimize what you can see, ITAM always starts with discovering assets and gaining comprehensive network visibility. IT discovery tools offer enterprises the following benefits:
- Improved security: By understanding all assets within an organization's network through continuous monitoring, security teams can identify and prioritize potential security risks and vulnerabilities.
- Increased efficiency: Organizations can use the information provided by asset discovery software to optimize resource allocation, reduce downtime, and improve overall efficiency.
- Enhanced compliance: IT discovery tools help organizations comply with regulations and standards, such as the General Data Protection Regulation (GDPR), by tracking assets and providing audit logs of all their activity.
- Streamlined planning and budgeting: IT asset discovery provides valuable information that organizations can use to plan and budget for future technology initiatives. These tools also help reduce operational costs by discovering overbilled software licenses, underutilized assets, and unauthorized cloud-based resources.
-
Q6. What key functions does the Adversity Discovery tool perform for your organization?
Discovering assets on your digital perimeter enables a holistic approach to cyber security and allows you to identify and prioritize assets that may be at risk. Knowing this information can help you take remedial action before an incident occurs. Asset discovery tools can automate many of the tasks associated with maintaining an asset inventory, including tracking asset location and ownership and assigning asset criticality. They can also help with auditing and compliance by generating reports that show which assets comply with internal security policies or external regulations. Overall, asset discovery is a critical practice for organizations looking to protect their systems and data from cyber threats. By identifying and mapping all the technology assets that exist outside the organization's firewall, security and IT leaders can gain visibility into their digital footprint, identify vulnerabilities, and implement measures to protect against cyber-attacks.
-
Q7. What is the philosophy behind Underscore ADA?
Organizations typically identify assets by IP address. While convenient, the IP address is not the asset: the entity on the network is a physical device that runs an OS and has a set of applications and services installed. A solution should therefore pin all other asset properties to the device. A given service may also be hosted by many applications. For example, when we say we have 10 web servers, the questions are: are they all running the same software? Do they have similar (if not identical) risk profiles? How many have weaknesses and how many don't? Should I be worried about all these weaknesses? ADA is expected to answer these questions and more. Leveraging its enumeration scans, Wi-Fi scan, and cloud sensor, the solution provides a comprehensive list of risks and actionable insights for the user.
-
Q8. Why should we invest in ADA?
Underscore ADA shines a light on blind spots in your attack surface and highlights potential risks using over fifty data sources to keep you informed in real time. Our scalable technology accelerates asset discovery, completing it in a fraction of the time that manual techniques take. Underscore ADA provides businesses with a real-time view of their managed and unmanaged assets, including IT, OT, and IoT, discovering up to five times more assets than traditional methods. The solution provides a single trusted source of asset information, helps organizations manage their technical debt, and improves security hygiene by identifying security gaps. Underscore offers a high-performance, reliable, and scalable ADA solution:
- Get a complete, accurate view of assets and vulnerabilities in your inventory
- Prioritize remediation efforts based on business risk
- Reduce mean time to remediation
- Improve your overall risk posture
- Automate your cybersecurity with an easy-to-use platform
- Benefit from robust reporting capabilities
-
Q9. Can the number and types of interfaces in ADA be customized to specific requirements?
Yes, ADA supports the customization of multiple interfaces to meet specific needs while maintaining strict control over unauthorized traffic.
- Management Interface: A dedicated port exclusively for configuration and administration, featuring secure web and shell-based access. Only authorized management traffic is permitted, with Role-Based Access Control (RBAC) implemented according to roles and responsibilities.
- Production Interfaces: Additional interfaces are available to manage data from various network segments, customized to align with production requirements.
All interfaces are designed to prevent the processing of unauthorized or unsanctioned traffic in any form.
-
Q10. Does ADA ensure that user information is not temporarily stored, and prevent unauthorized or unauthenticated requests through data stored in the browser or other temporary storage mechanisms?
Yes, ADA ensures that no information is accessible without proper authorization. No cache is stored, preventing end users from directly using APIs to gain unauthorized access to ADA and its resources.
-
Q11. Can ADA store its own logs for a defined time period?
Yes, ADA stores logs for a defined duration (e.g., 180 days or more). Additionally, it supports integration with organizational security systems such as SIEM/SOAR platforms and external syslog servers for log storage and analysis.
-
Q12. Does ADA rely on a specific database, and can it support backup and restoration using external storage?
No, ADA operates on a serverless architecture, with data stored in an encrypted format. It allows seamless backup and restoration to external storage without relying on any specific database.
-
Q13. Can the solution synchronize its internal clock with a central NTP server?
Yes, ADA includes functionality to configure NTP server details for time synchronization.
-
Q14. Can the solution detect unauthorized Wi-Fi devices across network segments, identify enterprise assets connected to them, and integrate with decoys to detect threats and identify at-risk services or applications?
Yes, ADA can detect rogue access points and provides visibility into endpoints connected to these rogue access points, as well as those connected to the sanctioned network. ADA shows endpoints, services, and applications with their versions, OS, and type. Additionally, ADA's decoy system detects threats associated with endpoints, including services and application ports.
-
Q15. Does the solution ensure that device data, service details, or vulnerabilities are not shared with third parties or external services?
Yes, ADA does not communicate with any third-party or external services for its functionality, ensuring complete confidentiality of device data, service details, and vulnerabilities.
-
Q16. How is ADA priced?
ADA has these priced components:
- ADA Appliance/Virtual Appliance: Priced at event rate; minimum 1,000 events/second
- Threat Intelligence Subscription: Renewable annually
- Support: Renewable annually
- Professional Services: Optional. Usually required when the logging and storage requirements need to be designed for scalability and high performance for one or more sites. Alternatively, the services may be required if the buyer's log-modelling needs require tuning of log patterns to improve the model
- Engineering Services: Optional. Usually required when one is contemplating setting up a security big-data lake using tools/backend databases such as Elasticsearch/Kibana or MongoDB
-
Q17. What capabilities does the ADA solution offer to improve data management, asset monitoring, security, and system health management?
The ADA solution provides the following capabilities:
- Query and Data Filtering: Users can create customized queries and apply advanced filters to efficiently retrieve relevant data for analysis.
- Data Export: The solution supports exporting and importing data in widely used formats such as TSV and CSV, facilitating integration with other analytics frameworks. Additionally, ADA can integrate with third-party software to fetch and process relevant data.
- Data Security: To ensure robust security, ADA avoids caching user data and prevents the use of browser cache or other temporary storage mechanisms for executing unauthorized or unauthenticated queries.
- Asset Profile Timeline: ADA offers a historical timeline view of an asset's profile, highlighting changes in configurations, platforms, operating systems, applications, vulnerabilities, and risk levels over time.
- Alarm Mechanisms: The solution includes alarm mechanisms to detect and alert users about misbehaving health parameters, ensuring timely interventions and resolutions.
- Visual Health Indicators: The solution provides intuitive visual indicators for monitoring critical system components such as power supplies, network cards, CPUs, RAM, storage, and essential processes, enabling effective system health management.
- Debugging and Troubleshooting: ADA facilitates debugging and troubleshooting directly through its user interface, streamlining the resolution of issues.
-
Q1. What is Event Log Aggregator?
Event Log Aggregator (ELA for short) is a solution that enables organizations to comply with regulatory or legal mandates that require log data to be retained for a long duration (many months or years).
-
Q2. How does ELA work?
ELA has an inbuilt universal syslog receiver. Hosts, devices, or applications whose logs need to be retained should forward them to ELA over syslog (UDP port 514 by default). Upon receiving a log from a new forwarder, ELA automatically creates a hierarchy: a directory named after the IP address of the forwarding host, containing one log file per application. Each file name is prefixed with the application name as it appears in the syslog message; the file suffix is ".log" by default.
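The per-forwarder hierarchy described above can be sketched as a small path-mapping function. This is an illustrative sketch only, not ELA's actual code; the tag-extraction heuristic below assumes RFC 3164-style messages (`<PRI>timestamp host app[pid]: text`):

```python
import re
from pathlib import Path

def extract_app(message: str) -> str:
    """Pull the application name (syslog tag) out of an RFC 3164-style message.

    Heuristic: the token just before the first ': ' is the tag; strip an
    optional '[pid]' suffix, e.g. 'sshd[1234]:' -> 'sshd'.
    """
    head, sep, _ = message.partition(": ")
    if not sep or not head.split():
        return "unknown"
    return re.sub(r"\[\d+\]$", "", head.split()[-1])

def log_path(base: Path, source_ip: str, message: str) -> Path:
    """Map one forwarded syslog line to '<base>/<forwarder-ip>/<app>.log'."""
    return base / source_ip / f"{extract_app(message)}.log"
```

For example, a message from 10.0.0.5 tagged `sshd[900]:` would land in `10.0.0.5/sshd.log` under the storage root.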
-
Q3. How long can ELA retain files?
This depends on the storage the buyer allocates or procures for ELA. By default, ELA is configured to purge files after 7 days, but that value is configurable, and the buyer can change it at runtime to satisfy compliance needs.
-
Q4. Continuously writing to log files will make them bigger. How does ELA handle large files?
ELA implements a daily log rotation policy: at the day boundary the log file is automatically rotated, and incoming log messages are written to a fresh file. The previous file is renamed with the suffix ".log.1", where the trailing ".1" indicates the first rotation. For each day that ELA rotates the log file, the trailing numeric suffix is incremented.
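The rotation scheme just described (shift each `.log.N` up by one, then rename the live file to `.log.1`) can be sketched as follows. This is an illustrative sketch, not ELA's implementation:

```python
from pathlib import Path

def rotate(log: Path) -> None:
    """Daily rotation: app.log -> app.log.1, after shifting older
    generations up (app.log.1 -> app.log.2, and so on)."""
    if not log.exists():
        return
    # Collect existing numeric generations and shift the highest first,
    # so no rename overwrites a file that has not moved yet.
    gens = sorted(
        (int(p.suffix[1:]) for p in log.parent.glob(log.name + ".*")
         if p.suffix[1:].isdigit()),
        reverse=True,
    )
    for n in gens:
        Path(f"{log}.{n}").rename(f"{log}.{n + 1}")
    log.rename(f"{log}.1")
```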
-
Q5. Does ELA use compression?
Yes. As part of rotation, ELA compresses a rotated file once it is a week old, to make better use of the available storage.
-
Q6. What capabilities does ELA offer besides retaining logs?
As log messages are ingested, ELA automatically identifies "observables" in real time and compares them against its Threat Intelligence database to help detect threats and risks that may have been overlooked.
-
Q7. What Security Analytics does ELA offer?
None. ELA is not a Security Analytics or SIEM solution. It is a lightweight tool specifically designed to meet compliance requirements by ingesting and retaining massive volumes of log data and furnishing required evidence at the click of a button, without delays. To satisfy Security Analytics needs, ELA can forward log messages to a security big-data lake for the relevant tasks. From a technical perspective, performing analytics on huge volumes of data requires normalization, format conversions, and a database to store relevant observables and message text, resulting in significant cost overhead for compute, storage, memory, and licenses. If the need is to comply with regulatory or legal mandates, ELA should be the preferred solution. Note that, besides offering ELA as a standalone product, we also offer services to build a security big-data lake along with the required analytics.
-
Q8. Does ELA offer any Application Performance Monitoring capability?
None. ELA was designed to be a very efficient, lightweight log aggregator that ingests log messages irrespective of their form or format. ELA creates an evidence package containing the required host/application log files on demand, without the need to look for data in multiple places, thereby saving considerable time during Incident Response.
-
Q9. This question may be repetitive, but we need the clarity: how is ELA different from a SIEM?
A SIEM tool uses a range of techniques to analyze, detect, prioritize, and in some cases respond to security risks. Before any of that is performed, ingested messages need to be parsed and normalized so that key features within the log messages are identifiable by the applications and processes that perform the required tasks. All these activities have high compute and memory requirements, and they often run on multiple systems that are not cheap, not to mention the expenditure incurred to manage the technology and its outcome. So, if the buyer intends to set up a Security Operations function, they are better off investing in a SIEM. If their intent is to comply with a regulatory or legal mandate without much fuss and operational overhead, ELA is the tool they should contemplate purchasing.
-
Q10. Why should we invest in ELA when other tools provide in-depth analytics capabilities?
The fundamental objective of an analytics solution is defined by its purpose: SIEM is for security monitoring, response, and regulatory compliance; APM is for monitoring application performance, capacity planning, and failure handling; Log Management is for application or business insights. These are specialized tools requiring careful planning, deployment, and fine-tuning before they are put to good use, and their operational expense does not come cheap. When they are required to ingest data that does not fit the use case they were designed to support, the outcome is not very heartening: either the operational expense goes up to maintain the desired level of outcome, or the quality of the tool's service (performance, reports, etc.) takes a hit, resulting in a high-cost, low-yield scenario. ELA, on the other hand, is specifically a compliance tool meant to ingest and retain data for a long time in a no-nonsense manner, requiring very little administrative overhead and operational expense. Its simple and intuitive user interface allows quick access to relevant log messages at the click of a button, enabling a faster response to legal and regulatory mandates.
-
Q11. But ELA does come with a Threat Intelligence Subscription. Why not other analytics?
It is true that ELA automatically identifies "observables" at runtime and looks up threat intelligence for a verdict on them, providing an added layer of detection. It is also not entirely true that ELA is devoid of analytics: ELA supports Log Modeling, which aggregates log messages and provides a glimpse into the nature of the log messages being captured for an application. It is meant to help decide whether an application's log data is worth retaining: configuring hosts to stop sending log messages from applications whose messages are irrelevant saves storage and bandwidth and ensures better utilization of the system. As far as other analytics are concerned, ELA integrates with security big-data lake solutions, allowing it to be used as a data repository on which analytics can produce meaningful outcomes.
-
Q12. How is ELA priced?
ELA has these priced components:
- ELA Appliance/Virtual Appliance: Priced at event rate; minimum 1,000 events/second
- Threat Intelligence Subscription: Renewable annually
- Support: Renewable annually
- Professional Services: Optional. Usually required when the logging and storage requirements need to be designed for scalability and high performance for one or more sites. Alternatively, the services may be required if the buyer's log-modeling needs require tuning of log patterns to improve the model
- Engineering Services: Optional. Usually required when one is contemplating setting up a security big-data lake using tools/backend databases such as Elasticsearch/Kibana or MongoDB
-
Q13. How can we utilize threat detections by ELA?
ELA threat detections are retained locally and can be viewed in the Explorer window, or forwarded as alert data to another syslog receiver.
-
Q1. What is a Threat Intelligence Aggregator?
A Threat Intelligence Aggregator is a tool or service that collects, normalizes, and consolidates threat intelligence data from various sources to provide a unified and comprehensive view of potential security threats and vulnerabilities.
-
Q2. Why do organizations need a Threat Intelligence Aggregator?
Organizations need a Threat Intelligence Aggregator to stay informed about the latest cyber threats, vulnerabilities, and attack techniques. It helps them proactively protect their systems and data by providing timely and relevant threat information.
-
Q3. What types of sources does a Threat Intelligence Aggregator gather data from?
Typically, Threat Intelligence Aggregators collect data from various sources, including open-source feeds, commercial threat intelligence providers, internal sources, government agencies, forums, and dark web data, to ensure a holistic view of the threat landscape.
-
Q4. How does a Threat Intelligence Aggregator help improve cybersecurity?
By aggregating and analyzing data from diverse sources, a Threat Intelligence Aggregator enables organizations to identify patterns, trends, and potential threats. This information allows for better threat detection, response, and vulnerability management.
-
Q5. Can a Threat Intelligence Aggregator be customized for specific industries or organizations?
Yes, many Threat Intelligence Aggregators offer customization options to tailor threat feeds and alerts to specific industries, sectors, or an organization's unique needs.
-
Q6. How often is threat intelligence data updated within the aggregator?
The frequency of updates can vary, but most aggregators provide near-real-time or daily updates to ensure that organizations have access to the latest threat information.
-
Q7. How can organizations integrate a Threat Intelligence Aggregator into their existing cybersecurity infrastructure?
Integration methods vary but often include APIs, SIEM system integration, and other data-sharing protocols. The aggregator should be compatible with existing security tools.
-
Q8. What features should I look for in a Threat Intelligence Aggregator?
Look for features like data normalization, customizable alerts, threat analysis, reporting, and the ability to correlate threat data with your organization's infrastructure for better context.
-
Q9. How does a Threat Intelligence Aggregator help with incident response?
It aids incident response by providing real-time alerts on emerging threats, allowing security teams to react swiftly to mitigate potential risks and vulnerabilities.
-
Q10. Is a Threat Intelligence Aggregator suitable for all sizes of organizations?
Yes, Threat Intelligence Aggregators are available in various scales, making them suitable for small, mid-sized, and large organizations. Choose one that fits your organization's needs and resources.
-
Q11. What types of threats does a Threat Intelligence Aggregator (TIA) help protect against?
TIAs help protect against a wide range of threats, including malware, ransomware, phishing attacks, data breaches, DDoS attacks, and zero-day vulnerabilities.
-
Q12. What benefits can an organization expect from a TIA subscription?
Benefits include enhanced threat detection, faster incident response, reduced vulnerabilities, improved risk management, and an overall strengthened cybersecurity posture.
-
Q13. Is a Threat Intelligence Aggregator subscription suitable for small businesses?
Yes, TIAs can be valuable for organizations of all sizes. Many providers offer plans that cater to the specific needs and budgets of small businesses.
-
Q14. Can I try a TIA before committing to a subscription?
Many TIAs offer trial periods to let organizations assess the service's suitability for their needs. Be sure to check with the provider for trial options.
-
Q1. What is the importance of time in a network?
Every modern computing or communication device comes with a built-in clock. This clock serves multiple purposes: it drives schedulers, timestamps events, and serves as a time reference for human or machine communication with other devices and users, within and outside the organization.
-
Q2. Understood that "time is important". But what's this focus on time about?
So long as everything is normal, no one really cares... honestly! However, when things go wrong and there is a need to troubleshoot a problem or investigate an issue, experts across domains and business functions look at timestamps to narrow down the time window of their investigation and, depending on the nature of the issue, examine logs, events, and system messages from one or many hosts or applications to determine the root cause and fix the issue. Imagine a scenario where timestamps across multiple devices are out of sync or, worse, way off. It would take many hours of extra effort to determine the affected systems or applications, identify the root cause, and decide on remediation measures, resulting in productivity or revenue loss. In short, this can be very frustrating.
-
Q3. Okay, understood. So, what are our options?
Plenty. In fact, every modern OS comes with a built-in capability either to synchronize its time to a central server on a schedule or, in some cases, to have a server enforce its timestamp on computers that are members of its directory services. So, it's relatively easy to have a central autonomous time source within an environment.
-
Q4. Is there still some issue?
Yes. In most cases, particularly for organizations with multiple locations, it becomes difficult for the mechanism at one location to remain an authoritative source of time for all the enterprise's assets. This results in issues similar to those mentioned in response to question #2. So, we are back to the starting point, but on a larger scale.
-
Q5. Well... what is the way out?
A simple way is to implement an in-house NTP server at each of the organization's locations that refers to a publicly available authoritative source (usually backed by an atomic clock). This way, there is little or no risk of time falling out of synchronization.
-
Q6. Is that expensive?
No, not at all. Most publicly available NTP servers are free and open-source software and are bundled with most Linux/BSD variants. Besides, most Windows platforms now bundle an NTP client and look up a publicly available Microsoft service for authoritative time. The service is not resource-hungry and can easily be set up on any available system or VM.
-
Q7. But what's the catch then?
The catch depends on the individual's or organization's perception, driven by security policy and caution, as well as operational complexity (doubts over open-source software, lack of access to Linux/BSD skills, etc.). From a security standpoint, most NTP servers run over the insecure UDP protocol, leaving an open channel back into the enterprise, something not many security practitioners are comfortable with. On the operational front, many enterprises and businesses may lack the necessary skills to set it up on the network.
-
Q8. What is UTA and how does it help?
UTA, or Unfailing Time Availability, is Underscore's NTP/PTP server that delivers robust time to the enterprise without the need to leave ports open or the complexity of setting up and managing open-source software. Organizations can configure their Active Directory services, Linux/BSD, or other IT/OT/ICT systems to use UTA as the authoritative NTP/PTP server and synchronize time throughout the organization. Given its small footprint and practically zero administrative effort, it can be deployed at multiple locations without much effort.
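At the wire level, "synchronizing with an NTP server" boils down to a small UDP exchange. The sketch below is a minimal SNTP query in Python (standard library only) to show the mechanics; it is illustrative, not part of UTA, and `uta.example.internal` is a placeholder host name:

```python
import socket
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_OFFSET = 2_208_988_800

def parse_transmit_timestamp(packet: bytes) -> float:
    """Extract the server's Transmit Timestamp (bytes 40-47 of an NTP
    packet: 32-bit seconds plus 32-bit fraction) as a Unix timestamp."""
    seconds, fraction = struct.unpack("!II", packet[40:48])
    return seconds - NTP_UNIX_OFFSET + fraction / 2**32

def sntp_query(server: str = "uta.example.internal", timeout: float = 5.0) -> float:
    """Send a minimal SNTPv4 client request and return the server's time."""
    # First byte 0x23: Leap Indicator 0, Version 4, Mode 3 (client).
    request = b"\x23" + 47 * b"\x00"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        response, _ = sock.recvfrom(48)
    return parse_transmit_timestamp(response)
```

In practice, OS-native clients (chrony, ntpd, Windows Time) handle this exchange, plus delay and offset corrections, once pointed at the UTA appliance's address.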
-
Q9. Where does UTA get its time from?
UTA looks up to the sky! It uses the GPS/GNSS constellations of geo-positioning satellites for time, and will very soon support the NavIC constellation as well. These satellites carry atomic clocks and maintain very precise time that is beamed down in pulses at frequent intervals. The GPS sensor of UTA receives time and position coordinates from the satellite, discards the position, and serves the time over the NTP/PTP protocol on the network.
-
Q10. Is there any redundancy with satellites?
Let's just say they are more resilient than most infrastructure we have come across. Down on Earth, we have the option of looking up to two or more constellations if we are worried about losing one. So, you could have a setup with two sensors: one for GPS (North America) and another for GNSS (global, including the European Union's and Russia's positioning satellites).
-
Q11. What are the advantages of using UTA?
First, you don't need to open ports on your firewall or worry about monitoring insecure ports. Second, the operational complexity is taken care of, since support is available.
-
Q12. Is it difficult to maintain? What could be the challenges?
Good question. It is not difficult to maintain so long as the sensors have a clear line of sight to the sky. They need to be deployed outdoors, or indoors near wide windows. On a cloudy day, signals may be poor, and lightning strikes could fry sensors deployed outdoors. But such incidents are few and far between.
-
Q13. Any other challenge?
Yes. If you're paranoid, GPS is prone to jamming.
-
Q14. What if GPS signals are lost?
Each UTA appliance has an inbuilt hardware clock. The clock maintains time precisely enough for most enterprise use cases (sub-millisecond drift at most) and should suffice until GPS signals are locked again.
-
Q15. How many UTA appliances should an enterprise deploy?
That depends. One can have almost all devices synchronize time with UTA directly, or synchronize Active Directory or other NTP servers with UTA and use those as level-1 network-wide time sources. For most environments, one (at most two) UTA servers per location should suffice.
-
Q16. How is UTA priced?
UTA is priced based on the number of GPS sensors desired and the quality of the inbuilt clock. For most enterprise use cases, the standard UTA system with a single GPS sensor and hardware clock is good enough.