Website scanners are essential technology for thwarting cybersecurity attacks against web applications. And these attacks are a major problem: according to Forrester Research, web applications are a leading vector of incursion, and such attacks have grown steadily over the past few years. Even more than software vulnerabilities, which offer a huge attack surface, web applications are the usual avenue of external entry.
To help protect against these attacks, let’s take a look at the website scanner market, then do a deep dive into the leading website scanner software.
There is often confusion about the various tools in the IT security arsenal. Terms such as website scanner, vulnerability scanning tool, website vulnerability scanner, and web application scanner are used interchangeably. But this is an error.
Vulnerability scanners and website vulnerability scanners are different. A website scanner does a remote scan of a website and often provides a badge that can be embedded on the site to show it has been scanned. Vulnerability scanners, on the other hand, scan the IT network, endpoints, and infrastructure as they look for vulnerabilities.
Also see: 5 Cloud Security Trends in 2022
What is Vulnerability Scanning?
Vulnerability scanners monitor applications and networks constantly to identify security vulnerabilities. They work in a variety of ways.
Many of them maintain an up-to-date database of known vulnerabilities and conduct scans to identify possible risks and exploits. They are typically used by IT to test applications and networks against known issues as well as in helping to identify new vulnerabilities. They also provide reports based on their analysis of known vulnerabilities and potential new exploits.
Vulnerability scanning, then, deals with the inspection of points of potential exploit to identify security holes. Regular scans detect and classify system weaknesses. In some cases, the application offers predictions about the effectiveness of countermeasures. Scans can be performed by the IT department or via a managed service.
Typically, scans are done against a database of information about known security holes in services and ports, as well as anomalies in packet construction, missing patches, and paths that may exist to exploitable programs or scripts.
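This lookup-based approach can be sketched in a few lines of Python. Everything here is illustrative: the product/version keys and advisory IDs are invented stand-ins, not real vulnerability records, and a real scanner would gather banners over the network rather than from a dictionary.

```python
# Hypothetical sketch: match discovered service banners against a
# database of known-vulnerable versions. Entries are invented examples.

KNOWN_VULNERABILITIES = {
    ("OpenSSH", "7.2"): ["EXAMPLE-CVE-A"],
    ("nginx", "1.16.0"): ["EXAMPLE-CVE-B"],
}

def parse_banner(banner: str):
    """Extract (product, version) from a 'product/version' banner."""
    product, _, version = banner.partition("/")
    return product.strip(), version.strip()

def check_host(open_ports: dict) -> list:
    """Flag services whose banner matches a known-vulnerable version."""
    findings = []
    for port, banner in open_ports.items():
        product, version = parse_banner(banner)
        for advisory in KNOWN_VULNERABILITIES.get((product, version), []):
            findings.append({"port": port, "service": product,
                             "version": version, "advisory": advisory})
    return findings

# Simulated results of a port scan: port -> service banner
scan_results = {22: "OpenSSH/7.2", 443: "nginx/1.18.0"}
findings = check_host(scan_results)
print(findings)
```

Only the OpenSSH service is flagged here, since the nginx version has no matching database entry, which mirrors how signature-based scanners only report what their database knows about.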
Some vulnerability scanners detect vulnerabilities and suggest possible remedies. Others attempt remediation and mitigation across the environment. Some provide strong support for audits and compliance via reporting, or are geared towards security standards such as PCI DSS, Sarbanes-Oxley, or HIPAA. Others specialize in the discovery of web-based holes or problems with authentication credentials, key-based authentication, and credential vaults.
Also see: Secure Access Service Edge: Big Benefits, Big Challenges
What Does a Website Vulnerability Scanner Do?
A website vulnerability scanner (a.k.a. a website scanner or web application scanner) scans through the pages of a website or web application to detect security vulnerabilities. Such tools look for security issues like cross-site scripting (XSS), cross-site request forgery (CSRF), and SQL injection. They automate the scanning of web applications and test them for common security problems. Some offer advanced functions to dive deeper into applications and find difficult-to-detect bugs such as asynchronous SQL injection and blind server-side request forgery (SSRF).
The techniques employed by web scanners include application spidering and crawling, discovery of default and common content, and probing web applications for common vulnerabilities. Scanning can be done actively or passively. The passive approach performs non-intrusive checks that are useful but often not thorough enough. Active scans simulate attacks on websites and web applications. Some tools also make use of access permissions to test whether further vulnerabilities can be unearthed.
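An active check can be sketched as injecting a harmless marker into an input and testing whether the response reflects it unescaped, which suggests a reflected-XSS risk. Everything below is simulated: simulated_fetch stands in for a real HTTP client, and the URLs and page behaviors are hypothetical.

```python
# Illustrative "active scan" probe. No real requests are made;
# simulated_fetch fakes a vulnerable page (/search) and a safe one.

import html

MARKER = "<xss-probe-7f3a>"  # harmless, distinctive probe string

def simulated_fetch(url: str, q: str) -> str:
    # Stand-in for an HTTP GET that echoes a query parameter.
    if url.endswith("/search"):
        return f"<p>Results for {q}</p>"            # echoes unescaped
    return f"<p>Results for {html.escape(q)}</p>"   # escapes safely

def probe_reflected_xss(url: str, fetch=simulated_fetch) -> bool:
    """Return True if the page reflects the probe marker unescaped."""
    body = fetch(url, MARKER)
    return MARKER in body

print(probe_reflected_xss("https://example.test/search"))  # True -> flag it
print(probe_reflected_xss("https://example.test/lookup"))  # False
```

A production scanner would of course use many payload variants, handle encodings, and crawl the site to find injectable parameters; this shows only the core active-probe idea.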
Also see: 5 Ways Social Media Impacts Cybersecurity
Top Website Scanning Tools
We will include examples of each type, both vulnerability scanners and web application scanners, though we will strongly favor the latter category. Here are our top picks, in no particular order:
Burp
The web vulnerability scanner within Burp Suite uses research from PortSwigger to help users automatically find a wide range of vulnerabilities in web applications. Sitting at the core of Burp Suite Enterprise Edition and Burp Suite Professional, it is used by more than 60,000 users across 15,000 organizations.
Qualys
The Qualys Cloud Platform, combined with its cloud agents, virtual scanners, and network analysis capabilities, brings together key elements of an effective vulnerability management program into a single app unified by orchestration workflows.
Nessus
Nessus by Tenable is a widely used vulnerability assessment tool, often deployed by experienced security teams. It can be used in conjunction with pen testing tools, providing them with areas to target and potential weaknesses to exploit. Tens of thousands of organizations use it in vulnerability assessments. Nessus began life two decades ago as an open-source tool but has since morphed into a proprietary one.
Acunetix
Acunetix by Invicti scans web-based applications. Its multi-threaded scanner can crawl hundreds of thousands of pages rapidly, and it also identifies common web server configuration issues. It is particularly good at scanning WordPress. Acunetix automatically creates a list of all websites, applications, and APIs, and keeps it up to date.
Netsparker
Netsparker is a web vulnerability management solution that focuses on scalability, automation, and integration. The suite is built around the web vulnerability scanner and can be integrated with third-party tools. Operators don't need to be knowledgeable in source code.
Syxsense
Syxsense is a network vulnerability scanner. It is not a web application scanner, but it can scan web servers to make sure they are patched, and it does basic checks such as verifying that a site has a valid SSL certificate. Syxsense also adds patch management and basic IT management as part of its suite.
Intruder
Intruder is a cloud-based vulnerability scanner that concentrates on perimeter scanning. It performs over 10,000 security checks and is strong at discovering new vulnerabilities, running emerging-threat scans for newly disclosed issues. Results are emailed to IT and available on the dashboard. It uses the same enterprise-grade scanning engine used by large enterprises and governments.
AppScan
AppScan has several versions for the enterprise, the cloud, and more. AppScan on Cloud, for example, is a cloud-based application security solution that provides AppScan as a service. AppScan Enterprise enables IT to perform large-scale application scanning, mitigate vulnerabilities, and achieve regulatory compliance.
Also see: Tech Predictions for 2022: Cloud, Data, Cybersecurity, AI and More
The post Best Website Vulnerability Scanners 2022 appeared first on eWEEK.
I spoke with Vishal Gupta, Chief Information and Technology Officer, Lexmark, about how companies can optimize their cloud deployment, and we also took a look at key trends driving IoT growth.
The post Lexmark’s Vishal Gupta on IoT and Cloud Trends appeared first on eWEEK.
Data modeling comprises the methodologies for creating representations of data, which allow users to better understand the values and associations that give the data its potential underlying value.
Data modeling is used to define and analyze the data requirements to support data mining and data analytics. The data modeling process involves professional data modelers working closely with business stakeholders as well as potential users of a system.
In this article, we discuss the data model, types of data models, data modeling techniques, and examples.
Also see: Best Data Modeling Tools
What is a Data Model?
A data model is a visual representation of data elements and the relations between them. It is the fundamental method used to leverage abstraction in an information system. Data models define the logical structure of data, how data elements are connected, and how the data are processed and stored in information systems.
Data models provide the conceptual tools for describing an information system at the data abstraction level. They enable users to decide how data will be stored, leveraged, updated, accessed, and shared across an organization.
Data models may also provide a portrait of the final system and how it will look after implementation. They help in the development of effective information systems by supporting the definition and structure of data on behalf of relevant business processes, and they facilitate the communication of business and technical needs in developing an action plan.
Early data models were often "flat": data was displayed in a single plane, which was limiting and could introduce duplications and anomalies. Modern data models are multidimensional and far more effective, making them useful to the development of business and IT strategy.
Also see: Top Data Visualization Tools
What Are the Types of Data Models?
The ANSI/X3/SPARC Standards Planning and Requirements Committee described a three-schema approach, first introduced in 1975. The three kinds of data-model instances are the conceptual schema, the logical schema, and the physical schema.
Also see: What is Data Mining?
Conceptual Schema
A conceptual data model or conceptual schema is a high-level description of information used in developing an information system, such as database structures. It is a map of concepts and the relationships between them, typically including only the main concepts and the main relationships.
The conceptual schema describes the semantics of an organization and represents a series of assertions about it. It may exist at various levels of abstraction; it hides the internal details of physical storage structures and instead focuses on describing entities, data types, relationships, and constraints. The conceptual schema design process takes the information requirements for an application as input and produces a schema expressed in a conceptual modeling notation. Below is an example of a conceptual schema:
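A conceptual schema can be sketched in code as little more than named entities and the relationships between them; the order-management domain, entity names, and relationships here are invented for illustration, and at this level there are deliberately no data types, keys, or storage details:

```python
# Hypothetical conceptual schema for a simple order-management domain:
# just entities and relationships, no types, keys, or storage detail.

entities = ["Customer", "Order", "Product"]

relationships = [
    ("Customer", "places", "Order"),    # one customer places many orders
    ("Order", "contains", "Product"),   # an order contains many products
]

for subject, verb, obj in relationships:
    print(f"{subject} {verb} {obj}")
```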
Logical Schema
A logical data model or logical schema is a representation of the abstract structure of the information domain that defines all the logical constraints applied to the stored data. It expresses a specific problem domain independently of any particular management or storage technology, and it defines views, tables, and integrity constraints. A logical schema defines the design of the information system at its logical level.
Software developers, as well as administrators, tend to work at this level. Although the data can be described as data records that are stored in the form of data structures, the data structure implementation and other internal details are hidden at this level. Below is an example of a logical schema:
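Continuing the hypothetical order-management domain, a logical schema adds typed attributes, keys, and integrity constraints, still without vendor-specific storage details. This sketch uses SQLite purely as a convenient, runnable notation; the table and column names are invented:

```python
# Hypothetical logical schema: entities become tables with typed
# attributes, primary keys, and integrity constraints.

import sqlite3

LOGICAL_SCHEMA = """
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT UNIQUE
);
CREATE TABLE "order" (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    placed_at   TEXT NOT NULL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(LOGICAL_SCHEMA)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
print(tables)  # ['customer', 'order']
```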
Physical Schema
A physical data model or physical schema is a representation of an implementation design; it defines data abstraction within physical parameters.
A complete physical schema includes all the information system artifacts required to achieve performance goals or create relationships between data, such as indexes, linking tables, and constraint definitions. Analysts can use a physical schema to calculate storage estimates, and this may include specific storage allocation details for an information system.
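At the physical level, artifacts such as indexes appear, and storage estimates become possible. In this sketch the table, the index name, and the row-size and row-count figures are all assumptions chosen for illustration:

```python
# Hypothetical physical-schema artifacts: an index to meet a performance
# goal, plus a back-of-the-envelope storage estimate.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL,
    placed_at   TEXT NOT NULL
);
-- physical artifact: an index supporting lookups by customer
CREATE INDEX idx_orders_customer ON orders(customer_id);
""")

# Rough storage estimate an analyst might derive from a physical schema
avg_row_bytes = 64          # assumed average row size
expected_rows = 10_000_000  # assumed table growth
estimate_mb = avg_row_bytes * expected_rows / (1024 * 1024)
print(f"~{estimate_mb:.0f} MB")  # ~610 MB
```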
Also see: What is Data Analytics?
What are Data Modeling Techniques?
There are various techniques to achieve data modeling successfully, though the basic concepts remain the same across techniques. Some popular data modeling techniques include Hierarchical, Relational, Network, Entity-relationship, and Object-oriented.
Hierarchical Technique
The hierarchical data modeling technique follows a tree-like structure in which nodes are sorted in a particular order. A hierarchy is an arrangement of items represented as "above," "below," or "at the same level as" one another. The hierarchical technique was implemented in the IBM Information Management System (IMS), introduced in 1966.
It was a popular concept in a wide variety of fields, including computer science, mathematics, design, architecture, systematic biology, philosophy, and social sciences. But it is rarely used now due to the difficulties of retrieving and accessing data.
Relational Technique
The relational data modeling technique is used to describe different relationships between entities, which reduces complexity and provides a clear overview. The relational model was first proposed as an alternative to the hierarchical model by IBM researcher Edgar F. Codd in 1969. It has four different sets of relations between entities: one to one, one to many, many to one, and many to many.
Network Technique
The network data modeling technique is a flexible way to represent objects and the relationships between them: objects are represented as nodes, and the relationships between nodes are illustrated as edges. It was inspired by the hierarchical technique and was originally introduced by Charles Bachman in 1969.
The network technique makes it easier to convey complex relationships, as records can be linked to multiple parent records.
Entity-Relationship Technique
The entity-relationship (ER) data modeling technique represents entities and the relationships between them in a graphical format consisting of entities, attributes, and relationships. An entity can be anything: an object, a concept, or a piece of data. The ER technique was developed for databases and introduced by Peter Chen in 1976. It is a high-level relational model used to define data elements and relationships in a sophisticated information system.
Object-Oriented Technique
The object-oriented data modeling technique constructs models of real-life scenarios as objects. Object-oriented methodologies were introduced in the early 1990s, inspired by the work of a large community of researchers.
An object-oriented model is a collection of objects that contain stored values, where the values themselves may be objects. Objects with similar functionality are grouped together and linked to other objects.
Data Modeling: An Integrated View
Data modeling is an essential discipline for understanding relationships between data sets. An integrated view of conceptual, logical, and physical data models helps users understand the information and ensures the right information is used consistently across an entire enterprise.
Although data modeling can take time to perform effectively, it can save significant time and money by identifying errors before they occur; without a model, even a small change in structure may require modification of an entire application.
Some information systems, such as navigational systems, involve complex application development and management that require advanced data modeling skills. Many open-source Computer-Aided Software Engineering (CASE) solutions, as well as commercial ones, are widely used for this data modeling purpose.
Also see: Guide to Data Pipelines
The post What Is Data Modeling? Types, Techniques & Examples appeared first on eWEEK.
Data modeling tools play an essential role in business, as the volume, velocity and variety of data that organizations manage has reached a tipping point. Identifying the right data at the right time, understanding relationships that extend across data points, and putting data into motion is critical for effective data analytics, and for efficient digital transformation.
Today, organizations that establish strong data modeling frameworks are far better positioned to maximize the value of their assets using data mining tools. Those that miss the mark frequently struggle to extract maximum value from their business intelligence software. What's more, they devote more time and resources to the task than their peers, particularly in data-intensive fields like artificial intelligence.
Let’s take a look at the data modeling market, then survey a list of the top data modeling tools.
Data modeling software delivers a comprehensive framework for collecting, managing and integrating data more effectively. The best tools tie together data from various systems and repositories and deliver specialized modeling and verification capabilities that help an organization make sense of all the data—and the overall data framework.
With a broad and deep view of data sources and connection points, an organization can build conceptual, physical and logical models that deliver value to various constituencies within an enterprise—and out to partners and a supply chain. It’s also possible to spot opportunities as they arise.
Also see: Data Mining Techniques
What Are Data Modeling Tools?
Conceptual data models focus on the overall structure of a business and its data. They are used to organize and manage broad business concepts that typically fall within the responsibility of data architects and business leaders.
Logical models extend the data framework of conceptual models by adding visibility into the attributes that define relationships between entities. In other words, they drill down to a more practical and functional level. For example, a logical model might define what happens to a specific piece of data when specific events or circumstances occur.
Physical data models refer to the actual implementation of a logical model. They are typically defined by developers and database administrators. A physical model works with specific tools, devices, and applications; it refers to the real-world use of data.
The value of effective data modeling is significant. It decreases the odds for data errors and typically speeds the time required to gain insights into business opportunities along with existing business processes. These models also introduce a common structure for collaboration among business and IT groups. This makes it easier to ensure that everyone is marching in the same direction and using data in a consistent way.
Not surprisingly, these tools have become far more sophisticated in recent years. They are able to peer inside cloud-based systems as well as on-premises data frameworks. They typically span different—and in the past incompatible—data types and objects, offering dashboards and reports that drive effective decision-making.
With a view of an organization’s data structure and relationships, it’s possible to optimize data to fit the specific needs of users and groups.
Also see: Data Visualization Software
How to Choose Data Modeling Software
As organizations look to connect disparate systems that rely on different structures and formats, data modeling tools deliver the diagrams and schemas for tying things together in the most seamless and efficient manner possible. They also deliver tools for managing and automating data management and use.
To be sure, it’s critical to select the right solution for your organization’s data modeling requirements. There are five key areas to consider when selecting a vendor and a data modeling solution:
Here are 10 of the top data modeling solutions:
Archi
Data modeling advantage: The open-source and cross-platform solution delivers an economical yet powerful framework for tackling complex data modeling. It relies on dynamic visual elements built atop the ArchiMate language, which organizations can adapt to the specific needs of various users, audiences, and stakeholders. The framework is widely used among corporations, universities, consulting firms, and more.
DbSchema
Data modeling advantage: DbSchema aids in the design and management of SQL, NoSQL, and cloud database frameworks using JDBC drivers. It offers a graphical interface and powerful capabilities that allow organizations to map and oversee complex data schemas and models. It includes powerful scripts and supports work online and offline.
DeZign for Databases
Data modeling advantage: The platform delivers an intuitive and powerful data modeling tool for developers and database professionals. It provides deep insight into database structures through visual database modeling. It is designed to aid in building new data structures and reverse engineering existing databases. DeZign for Databases offers multiple display modes along with powerful pan and zoom features that deliver a "birds-eye" view of diagrams and data structures.
erwin Data Modeler
Data modeling advantage: erwin Data Modeler helps organizations find, visualize, design, deploy and standardize enterprise data assets. The platform delivers insights into structured and unstructured data residing in relational or NoSQL databases, data warehouses and clouds. It integrates conceptual, logical and physical data models within a visual interface.
Idera ER/Studio
Data modeling advantage: The data modeling platform supports a wide range of data assets extending across platforms. It offers extensive tools for constructing business glossaries and shared data models for logical, physical and conceptual assets. The platform includes tools for handling forward and reverse engineering, data lineage, and "where used" analysis. The platform uses Unified Modeling Language (UML).
Lucidchart
Data modeling advantage: The cloud-based application provides intelligent diagramming capabilities that can be used to deliver insights into data frameworks, cloud infrastructures and business processes. It supports data visualization and real-time collaboration through flowcharts, mockups, UML and other frameworks.
MapBusinessOnline
Data modeling advantage: The MapBusinessOnline platform is designed to deliver insights into business processes by analyzing data across geographies. The cloud-based application connects to a variety of data sources, including CRM and spreadsheets, to deliver sales territory mapping, business map visualizations and more. It's highly filterable and includes robust sharing and collaboration tools.
Navicat Data Modeler
Data modeling advantage: Navicat offers a powerful yet cost-effective platform for modeling data through conceptual, logical and physical models. It supports a wide range of formats, including SQL/DDL, ODBC and specific frameworks such as Oracle, MariaDB, MySQL, SQLite, SQL Server and PostgreSQL. The software supports both forward and reverse engineering and works on all major platforms.
Toad Data Modeler
Data modeling advantage: Toad Data Modeler has emerged as a leading solution for multi-platform database modeling. It offers powerful visualization capabilities that help data administrators and others examine physical and logical models, along with relationships among databases and other repositories. The solution supports forward and reverse engineering and accommodates large and complex data models.
Vertabelo
Data modeling advantage: The vendor offers a robust modeling tool that delivers visualizations of physical, conceptual and logical data structures. The platform can generate SQL scripts, thus replacing the need to write them manually. It has built-in collaboration and sharing tools and supports both forward and reverse database engineering.
Provides a logical data modeling framework for large databases, including those deriving from Hadoop.
HeidiSQL
A free data modeling tool that offers sufficient features and capabilities for most organizations. It supports MySQL, Microsoft SQL Server, PostgreSQL and MariaDB.
IBM InfoSphere Data Architect
Delivers a sophisticated data modeling tool for aligning services, processes, applications and data.
Oracle SQL Developer Data Modeler
Supports data modeling for physical database architectures within Oracle environments.
SQL Database Modeler
Imports and builds SQL scripts used for modeling. Offers strong collaboration and sharing features.
Data Modeling Tools: Vendor Comparison Chart for Top Solutions

Product | Pros | Cons
Archi | Open source with strong cross-platform support. Highly flexible. | Interface is somewhat unintuitive. Performs slowly under complex and heavy workloads.
DbSchema | Intuitive interface. Powerful features with strong reverse-engineering capabilities. | Steep learning curve. Documentation and support are sometimes lacking.
DeZign for Databases | Supports a wide range of data formats. Flexible and highly customizable. | Best suited to technical experts. Somewhat dated interface.
erwin Data Modeler | Excellent model templates. Supports large and complex data models. | Steep learning curve. Dated interface.
Idera ER/Studio | Supports numerous data formats and frameworks. Strong collaboration. | Steep learning curve. Somewhat unintuitive interface.
Lucidchart | Highly rated interface. Extensive library of drag-and-drop templates and libraries. | Steep learning curve. Formatting and data modeling are not as advanced as competitors'.
MapBusinessOnline | Powerful features. Connects to numerous data sources. Strong reporting and visualization. | Narrower data focus than competitors. Tutorials and support resources are at times lacking.
Navicat Data Modeler | Intuitive interface. Multi-platform support. | Lacks collaboration features. Pricey, with advanced features available only in premium versions.
Toad Data Modeler | Strong support for numerous data types. Excellent visualization capabilities. | Steep learning curve. Pricing can be complex.
Vertabelo | Strong collaboration and sharing features. Free version offers useful features. | Not widely used. Graphics and visuals sometimes lag competitors.
The post Best Data Modeling Tools and Software for 2022 appeared first on eWEEK.
Modern marketers are presented with a vast amount of data that requires careful analysis to identify new trends and offer the best customer service. The data could be from within your organization, the market, or competitors. Insight from this data guides businesses in making key decisions – and gaining competitive advantage.
But sifting through this sea of unorganized data can be an overwhelming task. You could easily miss obvious patterns or possible errors and end up with a flawed analysis.
To help, data visualization can convert complex data into a visual format for a quick grasp and analysis. Clearer presentation of information helps you make accurate decisions and better marketing plans.
Also see: Top Data Visualization Tools
What Is Data Visualization?
Data visualization is the process of presenting data through a chart, graph, or other form of visual context. You can visualize the data as a whole or just the relevant sections you want to focus on at the moment. This allows staffers, especially non-technical staff, to understand and analyze the data better than with plain text or numbers in a table.
In the workplace, data visualization enables you to absorb complex sets of data and derive key insights quickly. From this analysis, you can create meaningful marketing campaigns and make informed business decisions.
Also see: What is Data Analytics?
The Use of Data Visualization in Marketing
Marketing teams can utilize data visualization in almost all phases of outreach, including processing data that has been collected, creating a marketing strategy, and analyzing the performance of that strategy.
Here are five ways marketers use data visualization to their advantage:
1. Conversion of Complex Data into a Digestible Format
Data in a spreadsheet is tiring to the eyes and brain because you need to be fully attentive to identify the important information. A few seconds of absent-mindedness and you miss a key figure that could identify a trend or an outlier.
Once the same information is presented as a graph, it’s easier to interpret the data and identify patterns and connections between various factors.
Marketers can use this feature to convert the raw data collected from the market to a format that’s simple to understand and analyze. It’s easier to analyze customer preference and company performance that way. This insight enables you to collaboratively create new effective marketing strategies and improve old ones.
Also see: Top Data Mining Techniques
2. Creating Customer Profiles
Modern marketing demands understanding your customers' preferences to customize your services and products. If you don't cater to their tastes, your competitor will. This use of digital tools to cater to buyers is a core element of digital transformation.
Successful marketing teams create profiles of their customers. Profiles can be categorized by traditional demographics such as age, gender, and location, or by more modern categories, for example, the social media platform customers use, or the channel, link, or advertisement that led them to your site.
Visualizing the data on all these demographics will help you create accurate profiles of your customers. It will give insight into how different customer groups react to specific marketing strategies, which profiles are more likely to be returning customers, and which products are most popular among specific groups.
This information guides the marketing team through creating personalized services for the customer. It’s also important for effective, data-driven targeted ads optimized for specific target customers.
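The profiling idea above can be sketched with a few lines of Python before any visualization step; the customer records and channel names below are invented for illustration:

```python
# Hypothetical sketch: group customers by acquisition channel and
# measure which segments are most likely to be returning customers.

from collections import defaultdict

customers = [
    {"id": 1, "channel": "social", "orders": 3},
    {"id": 2, "channel": "social", "orders": 1},
    {"id": 3, "channel": "search", "orders": 1},
    {"id": 4, "channel": "search", "orders": 5},
    {"id": 5, "channel": "search", "orders": 2},
]

segments = defaultdict(list)
for c in customers:
    segments[c["channel"]].append(c)

for channel, group in segments.items():
    returning = sum(1 for c in group if c["orders"] > 1)
    print(f"{channel}: {returning}/{len(group)} returning customers")
```

Segment counts like these are exactly what a bar chart would then visualize per profile.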
3. Analyzing Marketing Strategies
After creating and implementing marketing strategies, you need to analyze what worked and what didn't. At the start of any marketing campaign, you should have specific and measurable goals. These will serve as your basis for post-campaign analysis.
Unfortunately, raw data can be misleading during such analysis. A certain advertisement may appear to be successful in generating leads to your site. But once the data is visualized in a graphic that maps the customer journey to the end, you may realize that most of the leads from that channel don't convert into purchasing customers.
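The journey-mapping point can be made concrete with a small calculation; the channels and figures below are invented. One channel "wins" on raw leads, but following the journey through to purchase tells another story:

```python
# Hypothetical funnel figures per channel: leads generated vs. leads
# that actually converted into purchases.

channels = {
    "display_ads": {"leads": 5000, "purchases": 25},
    "email":       {"leads": 800,  "purchases": 40},
}

for name, c in channels.items():
    rate = c["purchases"] / c["leads"]
    print(f"{name}: {c['leads']} leads, {rate:.1%} convert to purchase")
```

Display ads generate over six times the leads but convert a tenth as well, which is the kind of pattern a journey visualization makes obvious at a glance.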
Also see: Top Data Analytics Tools
4. Assess and Improve the Performance of Various Platforms
Most marketing today is done digitally via social media posts, blogs, and websites. But, surprisingly, traditional platforms such as broadcast and print media are still popular and effective. Clearly, you cannot allocate equal resources to all of these platforms. Instead, identify the most effective platform for your product, based on the nature of your target audience.
You can do this by assessing the sales results attributable to each platform versus the resources used to attain the results. Visualizing the data (e.g., in graphs) will highlight the platforms with the best return on investment (ROI). Even among similar platforms such as social media, zero in on specific sites and assess their performance. The poor performers can be discarded while those with better results are allocated more resources.
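The ROI comparison described above reduces to simple arithmetic; the platforms and the revenue and spend figures here are invented for illustration:

```python
# Hypothetical revenue vs. spend per marketing platform, ranked by ROI.

platforms = {
    "social": {"revenue": 120_000, "spend": 30_000},
    "print":  {"revenue": 45_000,  "spend": 40_000},
    "email":  {"revenue": 90_000,  "spend": 10_000},
}

def roi(p):
    """Return on investment: net gain divided by spend."""
    return (p["revenue"] - p["spend"]) / p["spend"]

ranked = sorted(platforms, key=lambda name: roi(platforms[name]), reverse=True)
print(ranked)  # ['email', 'social', 'print']
```

In this made-up data, social generates the most revenue, but email delivers the best return per dollar spent, which is what a ranked ROI chart would surface.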
5. Improve Website Conversions
A user's experience on your website determines the type and length of interaction they will have with your site and product. A site that is difficult to navigate, slow to load, or inconvenient in any other way will drive customers away before they purchase or subscribe. A site with a high bounce rate loses out on revenue and also ranks low in search engine results.
That’s why you need to assess the users’ journey on your website, identify the pain points, and eliminate them to improve the customer experience. Visualized data will help you identify whether pertinent issues are coming from the website itself.
There are also external factors you can work on. For example, your choice of a web host could affect the loading speed of your site. If you realize this is the case, change to a more reliable alternative to stop losing potential customers.
Such assessments and improvements on the site ensure your conversion rates keep rising – but again, you must monitor this constantly.
Also see: Best Data Mining Tools
Conclusion: Always Keep Analyzing
Modern businesses are run on data from within their systems, from competitors’ sites, the market, and customers. Every business is, therefore, allocating resources to collect data to outdo their competitors, and so should you. But the data is unhelpful if you can’t analyze it properly or get accurate insight to guide your decisions.
Data visualization makes it possible for you to make sense of the data and also explain it to your team and stakeholders – easily and with visual support.
About the Author:
Jerry Low is the founder of WebRevenue.
The post Data Visualization in Marketing: 5 Key Steps appeared first on eWEEK.
Regional electric cooperatives and service providers have long played a vital role in supplying critical power and connectivity to rural and remote communities. Currently, 260 telephone and 834 electric cooperatives serve much of rural America, which includes just 14 percent of the population, but 72 percent of the land area.
Now these cooperatives and their utility members, along with scores of other wireless, wireline and internet service providers (ISPs), are poised to play a crucial role in connecting the remaining unserved communities.
This is a unique opportunity to do far more than just enable unserved communities to catch up to the high-speed broadband networks taken for granted in densely populated urban areas. By fully leveraging newly passed government funding and recent digital adoption gains, electric cooperatives and other regional ISPs can help these communities leap ahead in digital adoption.
However, leaping ahead will require a focus on the network end-to-end, not just on the critical last-mile access. It’s also critical to support core network technologies and systems. Utilities and other regional ISPs must strengthen the overall digital resiliency and security of their network, while meeting rising subscriber expectations. A more comprehensive approach will result in new subscribers served by a network that is fully carrier-grade, end-to-end.
Also see: American Connection Project Aims to Fill the Gaps in Rural Broadband
Rural Demographics Will Change with Work-from-Home
The move towards work-from-home appears to be a permanent shift. In a recent survey of 1,200 communications service provider (CSP) IT professionals, 67 percent expect that their business subscribers will continue to allow employees to work from home.
Also, once equipped with quality connectivity, the currently unconnected communities could potentially become havens for tech workers increasingly seeking the advantages of a rural area. This would also offer the same advantages to existing residents, specifically, access to global job markets and new economic opportunities.
In sum, this could boost today’s move toward digital transformation by providing a technical infrastructure that enables a larger user cohort to contribute economically.
Cybercriminals See Opportunity in Rural Broadband
Threat actors are eyeing the broadband expansion with a different view of opportunity: up to 42 million new victims in remote, less secured areas will soon be reachable via a high-speed path for malicious activities.
Rural institutions such as hospitals, schools, and banks are in many respects more vulnerable to cyberattacks, including DDoS attacks: they are critical to their local communities, yet are poorly monitored due to limited security resources and remote locations. These neglected communities also often include higher percentages of elderly residents, who tend to be less tech-savvy and have not had years of cybersecurity awareness.
Electric cooperatives and other regional ISPs should expect subscribers to voice more concerns and request better network security than in the past. As the U.S. government now pours tens of billions of dollars into Internet infrastructure, cyber defense must be a top priority in network plans for rural broadband expansion.
Also see: Cisco’s Rural Broadband Innovation Center: Leveling the Playing Field
Continued Rural Buildout: Low Latency, Cybersecurity
Utilities, electric cooperatives, and other regional ISPs that are undertaking the buildout to new rural communities need to acknowledge how much the landscape has changed in the last two years as they plan their expansion.
Speed, while important, is not the only design parameter that network operators need to consider as they design and build networks to new areas. Popular applications such as video conferencing require lower latency, higher uplink speeds and higher connections per second.
Moreover, the threat landscape is evolving, and actors are increasingly adept at launching a wide range of cyberattacks. Operators must double down on basic cybersecurity hygiene, continually adapt to a changing threat landscape, and upgrade network security infrastructure with automated defense mechanisms, such as granular DDoS detection and mitigation. Subscribers, both consumer and business, will expect higher levels of network security and availability. As such, security must take a higher priority in network investments.
Massive amounts of funding are now focused on bridging the digital divide for rural communities. If successful, long-standing inequities between rural and urban communities, tribal nations and other unserved or underserved areas may, over the next few years, finally be closed.
Electric cooperatives, the big winners in the 2020 RDOF auctions, should build out with an eye towards future services that will enable their currently unconnected communities to compete in a global market and leap forward to new opportunities.
About the Author:
Terry Young, Director, 5G and Service Provider Solutions, A10 Networks
The post Rural ISPs Boost Digital Transformation in Underserved Communities appeared first on eWEEK.
With increased literacy and understanding – and increasingly publicized failures – the curtain is falling on the public’s perception of artificial intelligence as a mysterious, unmanageable or independently evolving intelligence. We begin a new episode, in which we – as individuals and corporate entities – firmly acknowledge and embrace our inalienable role as the architects of AI’s future.
Embracing that agency, the human factor will become central to how and where enterprises think about deploying AI. In that vein, here are four human-centric predictions for what to expect for AI in 2022.
Also see: Top AI Software
1) Human Experience Takes the Field
As AI capabilities inexorably evolve, organizations continue to rethink employee and customer engagement. As a result, a focus on human-centric design or human experience (HX) will gain ground in all domains.
HX incorporates traditional elements of decision intelligence, CX and UX practices with design thinking. However, HX further expands the field of view to encompass both the human (individual) and humanity (society at large).
Informed by lessons learned in the digital realm and myriad high-profile AI failures, an HX orientation also – as artfully articulated by strategist Kate O’Neill – requires consideration not just of potential failures but also of wild success, thus moving strategic tools such as scenario planning beyond the boardroom.
Skeptics will point to HX as a rebranding of existing concepts. They are right, in part. However, this interpretation misses the mindset shift and the complexity inherent in bringing these disciplines together to effect long-term transformational change: one in which technology is deployed not for use by or on behalf of passive recipients (aka users and clients, respectively) but for humans. Humans who, individually and collectively, are actors in their own right, with all the agency and engagement that entails.
Also see: What is Machine Learning?
2) Renewed Respect for the Knowledge Worker
While AI will continue to be deployed to identify patterns and to surface previously unknown connections, the need for knowledgeable humans to make true sense of the algorithm’s outputs will be acknowledged. There will be a heightened appreciation for the power of AI but, more importantly, for the very real limitations of AI algorithms – best demonstrated by recent adventures with GPT-3.
This will direct organizations away from using AI to automate knowledge work in favor of using AI to expand the knowledge available to the expert. As such, automation of tasks with a high degree of variability or contextual nuance will shift toward supplementation. Examples range from healthcare practitioners to paralegals to production line engineers. Less context-dependent, invariable, and rote tasks will continue to be automated at pace.
3) Collaboration and Continuous Learning – For People and AI Algorithms
In a related but separate trend, improved AI literacy across the enterprise will result in the continued evolution toward collaborative, multi-disciplinary teaming models. This will result in an increased focus on identifying potential pitfalls, instituting operational guardrails, and test-driving AI-enabled processes in parallel with existing processes.
This will go a long way toward establishing realistic expectations by allowing AI models – and the humans wielding them – to learn and improve over time, as opposed to today, when decision makers often expect AI to outperform existing systems (human or analytical) with minimal flaws straight out of the training gate.
This will result in more robust, resilient and responsible AI solutions. It will also, perversely and positively, result in more intelligent decisions about when these solutions don’t make the grade.
4) Ethical AI: A Pregnant Pause
As the legal and compliance environment heats up, AI ethics initiatives may experience a pregnant pause as organizations assess the direction, weight, and viability of emerging regulations.
During this time, entities without discrete responsible AI programs already in place will take a pragmatic approach, exercising existing governance and risk management practices as a first line of defense. These may include data governance, data quality, cybersecurity, risk management, and audit practices. Explicit tactics will be influenced by industry and maturity level. These will leverage established bioethics and safety assessments in healthcare, model auditing and compliance in financial services, and safety engineering practices in manufacturing.
On the human front, organizations will begin to balance risk and compliance-led approaches with rights-based responses and corporate responsibility initiatives aligned with emerging ESG priorities.
Also see: AI and Ethics: Experts Speak about Challenges, Possible Directions
About the Author:
Kimberly Nevala is the Strategic Advisor for AI at SAS
The post Humans Will Be the Story of AI in 2022: 4 Predictions appeared first on eWEEK.
NTT (Nippon Telegraph and Telephone Corporation) is one of the world’s leading telecom providers. The conglomerate is deeply rooted in the mobile space, providing cellular services across Japan through its subsidiary NTT Docomo.
More recently, NTT has shifted its focus to building innovative solutions that utilize next-gen mobile technologies to help organizations deploy networks specifically customized for their business.
In August, NTT partnered with startup Celona to launch the first globally available private 5G network-as-a-service platform (NTT P5G), leveraging capabilities across NTT’s various subsidiaries. Celona developed the P5G technology powering NTT’s platform, which can be deployed via cloud, on-premises, or at the edge as a subscription-based service.
This solution is targeted at organizations that want a single private 5G network deployed across the enterprise, with visibility and administration controlled from a single, self-service portal.
In my latest ZKast video, I interviewed Shahid Ahmed, group executive VP of new ventures and innovation at NTT, to discuss the new partnership with Celona and why enterprises should be looking into private 5G right now. Highlights of the ZKast interview, done in conjunction with eWEEK eSPEAKS, are below.
Also see: How to Prepare for 5G in the Enterprise
Also: Network Industry Shows Strength Despite Tough Macro Issues
The post NTT Addresses the Why and When of Private 5G appeared first on eWEEK.
I spoke with David Linthicum, Chief Cloud Strategy Officer at Deloitte Consulting, about his advice for handling the key pain points that companies face with multicloud.
Among the topics we covered:
Listen to the podcast:
Watch the video:
The post Deloitte’s David Linthicum on Optimizing Your Multicloud appeared first on eWEEK.
Understanding user experience has been something of a “Holy Grail” for the IT industry for decades. Many vendors have tried, yet none have managed to crack the code.
Think back to names like Micromuse, Computer Associates, Riverbed and Netscout, which all tried to take their underlying management tool and adapt it to understand what a user is experiencing. There is obviously value in it, as it can help IT manage users better.
Legacy IT Models: Based on Reactive Management
With a traditional helpdesk, about 75% of trouble tickets for application problems are created by end users, putting IT in a tough position – they are constantly reacting to things. A true understanding of user experience would help IT be proactive and even fix problems before users call.
To achieve this, a typical enterprise deploys up to a dozen monitoring tools. This is not only costly but also inefficient.
Cloud-based information security provider Zscaler is taking a different approach. Its goal is to combine multiple monitoring tools into a broader digital experience monitoring service, called Zscaler Digital Experience (ZDX).
The subscription-based service is delivered on Zscaler’s cloud-native Zero Trust Exchange platform, so it’s completely transparent to the user.
Understanding User Experience With Insights from Security Cloud
ZDX leverages insights gathered through the Zero Trust Exchange, which acts like an intelligent switchboard to connect users, apps, and devices over any network. ZDX baselines the user experience regardless of the app and then looks for deviations.
When ZDX detects a deviation, it uses a scoring principle (the “ZDX score”) to rate the severity of the problem. By understanding where the problems are and why they’re happening, organizations can resolve them faster.
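Zscaler does not publish the internals of the ZDX score, so the following is only a generic sketch of baseline-and-deviation scoring in that spirit; the 0–100 scale, the latency metric, and the 20-points-per-standard-deviation penalty are all assumptions for illustration:

```python
import statistics

def experience_score(baseline_latencies_ms, current_latency_ms):
    """Score 0-100: how far current latency sits above the learned baseline."""
    mean = statistics.mean(baseline_latencies_ms)
    stdev = statistics.stdev(baseline_latencies_ms) or 1.0  # avoid divide-by-zero
    z = (current_latency_ms - mean) / stdev  # standard deviations above normal
    # Assumed penalty: 20 points per standard deviation above baseline.
    return max(0.0, min(100.0, 100.0 - 20.0 * max(0.0, z)))

baseline = [120, 130, 125, 118, 127]    # ms, learned during normal operation
print(experience_score(baseline, 124))  # at the baseline mean: healthy score
print(experience_score(baseline, 200))  # large deviation: severe degradation
```

Baselining per user and per app, as the article describes, would mean keeping one such rolling baseline for each (user, app) pair rather than a single global one.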
In November, Zscaler made several enhancements to ZDX and added comprehensive monitoring for unified communications as a service (UCaaS) apps. I recently chatted with Dhawal Sharma, vice president of product management at Zscaler, to better understand how ZDX works, the new enhancements, and how it fits into the overall Zscaler platform. Highlights of my ZKast interview, done in conjunction with eWEEK eSPEAKS, are below.
Also see: AIOps Provides a Path to Fully Autonomous Networks
The post Zscaler Brings a New Approach to Experience Management appeared first on eWEEK.
As enterprises have increasingly chosen multi-cloud deployments over single vendor engagements, the need to define and address complexity has never been greater. Dell Technologies’ new APEX services and DevOps capabilities for multi-cloud applications offer a prime example of how vendors can proceed.
Dell expressed two goals in launching APEX as-a-Service solutions in May 2021: to simplify how customers consumed and managed IT assets and services, and to ease their access to and control of cloud compute resources.
In essence, the company was working to ensure that organizations could deploy and use compute, storage and other IT resources, whether on- or off-premises, as easily as they did public clouds. At the same time, Dell recognized the importance and value of making sure that customers’ IT assets – wherever they were located – delivered the same consistency and security as those in on-premises data centers.
Since then, Dell has steadily added new multi-cloud solutions to the APEX portfolio, including close integration with VMware Cloud solutions, enterprise-ready data storage and protection features, and collaborations with all major public cloud vendors.
Also see: Top Cloud Service Providers & Companies
Dell’s New APEX and DevOps Offerings
The company’s new multi-cloud services reflect this approach for distinct cloud-focused business audiences:
In addition, Dell announced a new cloud storage initiative, Project Alpine, which will bring the software IP of its flagship block and file storage platforms to leading public clouds. As a result, customers will be able to purchase Dell storage software-as-a-service using existing cloud credits, and simplify managing and sharing storage in on-premises facilities and across multiple public clouds.
Also see: Why Multicloud is Now Fully Mainstream
Expanded Access and Dell Partners
Dell’s APEX Data Storage Services are expanding to 13 countries across Europe and Asia and are also available with Equinix colocation services in the United States, United Kingdom, France, Germany and Australia. APEX Cloud Services with VMware Cloud is now available in the United States, United Kingdom, France and Germany.
It is also worth noting that while APEX services are available directly through Dell, the company has also designed the APEX portfolio for channel partners’ value-added offerings and customer engagements. Systems integrators with established practices around popular public cloud platforms, including AWS, Azure and Google Cloud, are potential beneficiaries.
In addition, partners with storage and data protection businesses that desire to expand into cloud-based solutions can use Dell APEX as a foundation. Finally, cloud service provider (CSP) partners that are building or hosting Storage-as-a-Service and cloud solutions can use APEX Data Storage Services and APEX Cloud Services with VMware Cloud to simplify solution integration and operations. They can do this while accelerating customer time-to-value and also creating opportunities to layer on their own specialty services.
Final Analysis: An Agnostic Cloud Enabler
What’s the bottom line on these new services and offerings? From the beginning of its work on cloud computing, Dell Technologies focused on demystifying the technologies involved while addressing the needs of its cloud-bound business customers.
Firmly positioning itself as an agnostic cloud enabler – rather than becoming tightly tied to a small handful of large vendors – has placed Dell in a prime spot for developing and supporting multi-cloud solutions, including its APEX-aaS offerings.
The new APEX Multi-Cloud Data Services and Backup Services extend Dell’s existing solution sets, as well as its leadership position in enterprise storage markets. The new DevOps-ready platforms and refreshed portal highlight the company’s support for the increasing number of businesses planning or deploying modern, cloud-native applications and processes.
Finally, Dell’s Project Alpine offers insights into the company’s future, and how it will continue to smooth customers’ journeys to multi-cloud environments and help ensure the consistency and security of their experiences.
Also see: Multicloud Best Practices
The post Dell APEX: Increasing Consistency in Multi-Cloud Deployments appeared first on eWEEK.
I spoke with Rajiv Ramaswami, President and CEO of Nutanix, about key trends and issues in multicloud computing, including the future of this emerging technology.
Among the topics we addressed:
Listen to the podcast:
Watch the video:
The post Nutanix CEO Rajiv Ramaswami on the Future of Multicloud appeared first on eWEEK.
One-off sustainability initiatives and short-term goals just don’t make the cut in today’s ESG-focused market. Consumers now demand greater transparency into business operations, supply chains, and social impacts. Additionally, they’re on the lookout for organizations making investments in the circular economy; that is, the infrastructure dedicated to recycling and reuse.
In fact, a survey found that 66 percent of US consumers believe it is an organization’s responsibility to demonstrate its ESG performance. It’s important to remember that some of these consumers are C-level executives at your customer and prospect companies.
But it’s not just consumers out on the hunt; investors are also insisting on better ESG policies from prospective organizations. Stakeholders want to see detailed plans on how organizations will commit to sustainability initiatives – and more crucially, how they intend to measure and deliver on these.
Also see: Digital Transformation: Definition, Types & Strategy
The Race to Net Zero Has Begun
In the wake of the pandemic, organizations with flexible and transparent ESG strategies have been able to pivot with the changing landscape. However, remaining agile means demonstrating tangible progress, not just written plans. Research suggests that it is essential we achieve net zero emissions by 2050. While many organizations have vowed to make decarbonization a priority, many have yet to take action to reach the net zero goal.
The Race to Zero is a global campaign designed to gain support and action from businesses, leaders, and investors for a zero-carbon economy. This initiative was spotlighted at the UN Climate Change Conference of the Parties (COP26), an event designed to boost real change for sustainability.
Understand What ESG Means to Your Customers
Actions speak louder than words, and it’s up to organizations to prove that they are doing what they’ve stated they will. There is no ESG strategy that will suit all organizations across the board. So businesses need to ensure that their unique sustainability plans not only accommodate what they believe in, but what their customers believe in, too—which can vary from region to region.
Research from GreenBiz highlights the different focus areas each region worldwide associates most with sustainability. For example, in North America, consumers associate sustainability most with recycling. Therefore, investment into the circular economy will be vital to align strategies with local sentiment and ensure the actions of the organization resonate well with the domestic audience.
Also see: Top Digital Transformation Companies
Tech’s Pivotal Role
Waste and pollution reduction, product and material circulation, and nature regeneration are the foundations of a circular economy model, and they are difficult to achieve without technological investment.
Climate tech solutions come in many forms, including Application Programming Interfaces (APIs), Internet of Things (IoT), cloud computing, and Software as a Service (SaaS). Investing in these technologies and harnessing their capabilities, such as enhanced product flow and traceability, has enabled organizations to increase waste management control and raise accountability.
As a result, an increasing number of investors recognize the need for climate tech. The market for climate related technology has been thriving; in the first 6 months of 2021, $14.2 billion was invested into climate tech worldwide—this is 88% of the total investments made in 2020, according to research from Pitchbook.
While there are still some businesses that have not yet adopted such a “track, trace, and think” methodology, future-focused companies have already recognized and capitalized on the supply chain traceability, company profitability, and many other benefits that are provided by disruptive technologies and a circular economy.
Everyone on Board – Including the Board
Historically, executives that empower the rest of the organization to be proactive and strategic have enjoyed the most employee engagement and have been the most successful in making an impact when it comes to sustainability. Creating a corporate-level sustainability agenda will keep the whole organization on track to reaching its sustainability goals.
Corporate goals and objectives will be pivotal in establishing an effective ESG strategy, but so will the efforts of everyone else in the business. Incorporating a sustainable culture into the organization – and taking time to celebrate and acknowledge the wins – will ensure everyone is motivated to play their own role in making a difference. Ultimately, to succeed in meeting ESG goals, everyone must be on board to collaborate across their business ecosystem—from stakeholders to customers to partners.
The move toward better sustainability practices is more urgent than ever. That’s why events such as COP26 are pivotal in raising awareness, not only within the hearts and souls of people, but within businesses, nations, and leaders around the world. The clock truly is ticking, yet the transition is doable, and the technology is already there to lend a digital hand to those that wish to pledge their commitment.
Also see: Predicting Cloud Trends for 2022: Migration, Multicloud and ESG
About the Author:
Cindy Jaudon is President of IFS North America
The post The Urgent Need to Align Business Strategy with ESG appeared first on eWEEK.
Multicloud is one of those topics that most cloud providers would rather not discuss. Why? Well, acknowledging it means admitting that their cloud is not the only destination for your applications and data. Also, and most important, they will be forced to work and play well with other cloud providers and do so as a stated policy.
Sadly, if you’re moving down a multicloud path, which most enterprises are these days, you’re mostly on your own in terms of how you put together an optimized multicloud architecture. It’s no real surprise that a lot of companies are making a lot of mistakes right now, and this is leading to multicloud failures that cost enterprises millions of dollars.
The Flexera 2021 State of the Cloud report states that 92% of enterprises have a multicloud strategy. This ranges from those who have begun their journey to those who are just entering the planning stages. So you’re not alone in your struggle: Managing complexity across different cloud providers is the core battle that enterprises are now facing.
Why We Multicloud
For those of you considering a single cloud deployment to overcome the challenges of multicloud: you would effectively lose the core business benefits of a multicloud deployment. These benefits include:
The ability to select best-of-breed technology
AWS may have the best data analysis system for your inventory management system. However, Google may have a better AI platform for your needs, and Microsoft may have a SaaS system that you want to utilize.
We all want to find an optimized solution that leverages the best technology to fit the specific needs of the business. A multicloud deployment typically offers the lowest cost and most business-optimized solution—at least, conceptually.
Also see: AWS vs. Azure vs. Google Cloud
Opportunities for operational cost efficacy
All public clouds have different pricing options. Either the price is higher or lower, or more likely, the way they bill for services may favor some use cases but not others.
Take data ingress and egress, for example. Cloud providers are all over the place on what they charge to simply bring data into your cloud-based data stores or transmit the data out of the cloud. The way they bill could be for data sent and received, the bandwidth used, or the time expended.
This means your configuration will have advantages with some clouds, but not with others. Given that this could be a bill that moves to well over $500k per year and is likely to increase as the business grows, these are major considerations. Therefore, it is important to consult with your cloud provider for specific policies and pricing.
Pricing analysis then moves on to storage pricing and policies, such as leveraging reserved instances to save some money. Other areas of pricing you should consider include how compute is charged and special services such as artificial intelligence and analytical systems. All are different, and all should be considered along with the business requirements. Then, consider again which cloud is the most economical to meet a specific business need.
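To make the egress comparison above concrete, here is a sketch of a cost comparison. The per-GB rates and provider names are placeholders, not published prices, and real billing models (tiered, bandwidth-based, or time-based) are more involved:

```python
# Placeholder per-GB egress rates in USD; not real published prices.
egress_rate_per_gb = {
    "cloud_a": 0.09,
    "cloud_b": 0.087,
    "cloud_c": 0.12,
}

def monthly_egress_cost(gb_out, rate_per_gb):
    return gb_out * rate_per_gb

gb_out = 50_000  # 50 TB leaving the cloud each month
for cloud, rate in sorted(egress_rate_per_gb.items(), key=lambda kv: kv[1]):
    cost = monthly_egress_cost(gb_out, rate)
    # Small per-GB differences compound into large annual deltas.
    print(f"{cloud}: ${cost:,.0f}/month (${cost * 12:,.0f}/year)")
```

Even the few-cents spread in these invented rates produces a five-figure annual gap at this volume, which is why the article flags bills well over $500k per year as a design consideration.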
Removing single vendor dependency
Many point to multicloud as a way to avoid vendor lock-in. While using a single vendor can certainly have its advantages, if you write your applications to leverage whatever native cloud API is provided by only one of your cloud providers, you’re pretty much locked in. Multicloud does little to solve this problem other than allowing you to get locked into more than a single vendor, but the limitations are much the same, if not worse.
On the other hand, there is a notion that having the ability to leverage more than one vendor provides a bit of operational and business leverage. For example, the ability to have a pre-established relationship with more than one public cloud provider allows you to make better choices around the use of core cloud services, such as storage and compute. You even have options available if one of the vendors does not provide good terms or good service.
The same can be said around operational issues. For example, let’s say your primary cloud storage provider is not living up to SLAs (service-level agreements). With a multicloud deployment, you can easily move to another place, which can reduce both risk and cost.
Also see: Private Clouds Remain Central in a Multicloud World
Emerging Multicloud Best Practices
While multicloud deployments are relatively new, enterprises are already beginning to gather some best practices regarding multicloud solutions. These are best practices that most cloud providers would rather you not know about or leverage.
It’s not in cloud vendors’ best interests to spend billions of dollars pushing enterprises toward other vendors’ clouds. Their dollars get spent attracting enterprises to their cloud and no other. Thus, multicloud is becoming a path where enterprises are largely on their own.
So, when it comes to cloud computing, these emerging best practices are about acting and thinking more independently than you have in the past.
And more important, you must understand that cloud is not a “one size fits all” type of technology deployment. A cloud deployment needs to be a decision that considers all aspects of your business and uses best practices to find the most optimized multicloud solution. You must weigh cost efficiencies and complexities and the cloud platform’s ability to meet the needs of your existing and future business.
Here are some of the best practices that most often drive multicloud success:
1. Consider the resulting complexity of your multicloud deployment
There are a number of technologies deployed across a multicloud deployment, and all need to be understood regarding function, correct integration, and long-term operations and support.
With cloud services such as storage, compute, AI, serverless deployment and more, it can be difficult to figure out how to configure and leverage these services optimally for your business.
With a single cloud deployment, you limit the number of technologies you can leverage, but most of them are purpose-built to work well together. With a multicloud, multiply the technologies available on each cloud by the number of clouds you leverage. For example, a single cloud might have 5 different native storage solutions (e.g., block, object, file, etc.), but you could have as many as 30 when you leverage three public clouds as part of your multicloud.
Let’s say you leverage 10 of the 30 possible storage solutions, for use with different applications and data stores. Don’t forget that you’ll need to hire and train for the additional technologies, as well as operate them using CloudOps staffers who understand how each works, as well as expertise for more specialized ops tooling. You can count on spending as much as twice or three times the budget for operations than when deploying on a single cloud provider.
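The multiplicative growth in options can be made concrete with a toy enumeration. The cloud and storage-type names are generic placeholders; with five storage types per cloud the count is 15, and with the larger real-world catalogs it reaches figures like the 30 cited above:

```python
from itertools import product

clouds = ["cloud_a", "cloud_b", "cloud_c"]  # generic placeholders
storage_types = ["block", "object", "file", "archive", "ephemeral"]

# Every (cloud, storage type) pair is a distinct technology to design for,
# integrate, staff, and operate.
options = list(product(clouds, storage_types))
print(len(options))  # 3 clouds x 5 storage types = 15 combinations
```

Each adopted combination carries its own hiring, training, and ops-tooling cost, which is where the two-to-three-times operations budget estimate comes from.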
Most enterprises won’t spend that kind of money to deal with multicloud complexity, which means complexity must be mitigated during the design phase of your multicloud. You have to plan ahead for operations.
You can also mitigate much of the complexity by using advanced operational tools, such as AIOps, which leverage abstraction and automation to handle complex operations with fewer humans. While this can reduce costs compared to traditional methods of handling complexity, operational tools like AIOps typically work best when designed in from the start and will cost much more if integrated later.
Also see: The Future of CloudOps: Big Challenges and Possible Solutions
2. Consider multicloud cost governance
Just as too much architectural complexity leads to cost overages, failing to get a solid grip on the consumption of cloud services across a multicloud deployment will lead to cloud bills that outweigh any benefits multicloud can provide.
Cost governance goes by other names, such as FinOps, which basically leverages cost monitoring technology to track which cloud services are being consumed, by whom or what, and how much they cost over time. The trouble is that many who deploy multicloud rely on the native cost monitoring tools from each provider. While these are fine when you leverage a single cloud, they are overly complex to deal with across two or more clouds – and often lead to costly mistakes.
The new generation of cloud cost governance tools should be able to monitor the ongoing usage of cloud resources, set limits on usage, and provide charge-backs and show-backs to budget per organization. They should provide deep data analytics to determine how spending will likely change over time. This includes what-if analysis to weigh the cost of leveraging different cloud services, including more complex usage billing, such as ingress and egress charges, and discounted services purchased ahead of need.
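The charge-back and budget-limit behavior described above can be sketched in a few lines. This is a minimal illustration, not a real FinOps product; the teams, services, and dollar figures are invented for the example.

```python
from collections import defaultdict

# Hypothetical usage records: (team, cloud, service, monthly_cost_usd).
usage = [
    ("analytics", "aws",   "object-storage", 1200.0),
    ("analytics", "gcp",   "bigquery",       3400.0),
    ("web",       "azure", "compute",        2100.0),
    ("web",       "aws",   "egress",          800.0),
]

# Per-team monthly budgets (illustrative).
budgets = {"analytics": 4000.0, "web": 3500.0}

def chargeback(usage):
    """Roll up spend per team across all clouds (a simple show-back)."""
    totals = defaultdict(float)
    for team, _cloud, _service, cost in usage:
        totals[team] += cost
    return dict(totals)

def over_budget(totals, budgets):
    """Flag teams whose combined multicloud spend exceeds their budget."""
    return {t: c - budgets[t] for t, c in totals.items() if c > budgets.get(t, 0.0)}

totals = chargeback(usage)
print(totals)                        # spend per team across all clouds
print(over_budget(totals, budgets))  # {'analytics': 600.0}
```

The point of the roll-up is that it is cloud-agnostic: spend from every provider lands in one ledger, which is exactly what single-provider cost tools cannot give you.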
3. Consider what’s between the clouds rather than what’s in them
What’s important about multicloud is not what runs within each cloud, at least conceptually; it’s what runs between them. This includes the common services that should span the entirety of the multicloud – security, governance, operations, etc. – services that should be the same between clouds and that will operate as “cloud agnostic” or “cross-cloud” functioning technology.
This can be a bit confusing because these technologies may operate within a specific cloud provider as a third-party application. However, they are purpose-built to deal with all the clouds within your multicloud deployment as if they were functionally operating between them on an independent platform.
These third-party technologies that can span clouds may include cross-cloud security managers that leverage a common identity directory, cost governance as covered above, and operational tools such as AIOps that span all clouds – along with any other services that are common to all clouds and should run above them rather than as a tool native to a single cloud.
Also see: AIOps Trends
Learning Multicloud by Trial and Error
Core to the best practices presented here is the ability for enterprises to create their own path to multicloud. Yes, it would be nice if there were a single solution pattern that fit all enterprises’ multicloud deployments. In the real world, each architecture needs to be purpose-built for the enterprise that leverages it.
The good news? Some common best practices are beginning to appear. It’s important that you learn as much as you can from the trials and errors of those who came before you, and then, come up with your own specific solutions. Those who follow the best practices mentioned in this article are more likely to find multicloud success.
The post Best Practices for Multicloud (that Cloud Providers Prefer You Not Know) appeared first on eWEEK.
5G is taking the world by storm due to its remarkable speed and bandwidth strength. It supersedes 3G, which gained popularity in the early 2000s after the iPhone 3G was released, followed by 4G in 2011. 4G and 4G LTE are still heavily relied upon, but 5G is becoming more available for enterprises, small businesses and everyday consumer use.
Although 5G is hailed for its low power consumption and increased interconnectivity, adopting a 5G network does come with some degree of risk. Businesses should be aware of the potential increased risk and cybersecurity concerns that the 5G network may introduce.
Also see: Cybersecurity Risks of 5G: How to Control Them
Faster Networks Provide Ideal Attack Conditions
The perfect scenario for a hacker is a vulnerable device connected to a low-latency, high-speed network, because it responds faster, improving the speed at which security weaknesses are identified and exploited. Traditionally, this type of threat was limited primarily to devices connected via terrestrial networks. The new generation of 5G connectivity aims to provide latency and throughput similar to fiber optic internet connectivity.
While 5G itself is not a security threat, a device connected to a 5G network is not offered additional security. In fact, the higher throughput speed offered by 5G acts as a double-edged sword.
5G is theoretically 10 to 100 times faster than its 4G predecessor, meaning any 5G-connected device will be far more appealing as an attack target. Cybercriminals want quick wins; therefore, any device that responds rapidly is a more attractive target than a device on a high latency, low throughput network.
On systems where a large cache of data exists, the ideal situation for an attacker is to find it connected to a high bandwidth network capable of significant data throughput, enabling data exfiltration to occur faster. The alternative method more frequently used by attackers is to upload ransomware onto the target, which then proceeds to encrypt all valuable data in a manner that enables the attacker to extort a ransom in return for a decryption password.
More Bandwidth Means More Vulnerable Devices and Data
As IoT technology and edge computing continue to spread exponentially into both consumer and industrial devices, their dependency on agile connectivity will also increase. 5G offers increased speed and greater concurrency, making it possible for a larger number of critical IoT devices to stay connected at one time.
With 5G enabling heavier loads of devices to be connected, it naturally leads to higher volumes of data being transmitted, shared, and potentially compromised through undiscovered device vulnerabilities. Where such devices are involved in the collection and processing of personal information, any subsequent security breach may result in the exposure of health care files, banking transactions and sensitive customer data.
Consumer privacy will be an ongoing concern for businesses. Staying aligned with global data privacy laws allows businesses to avoid increasing financial penalties issued by regulators. Moreover, protecting sensitive information bolsters consumer trust and creates strong, long-term customer relationships that enable organizations to use data privacy as a competitive advantage.
A Future Enabled by 5G: Risk vs. Reward
5G offers near-instantaneous communications for current and next-generation devices such as smart cars, drones and endless other applications that drive our society towards a modern future.
While this new level of connectivity may introduce additional risks, it also offers substantially more rewards where risks can be managed with proactive mitigation and awareness of the data on every connected device. Ultimately, the purpose of an attacker looking for devices connected to high-speed, low latency networks is to find weaknesses to steal or leverage the valuable data within these devices.
The world’s thirst for high connectivity will never stop, and as the world migrates toward 5G technology, the conversation will shift to what the next successor technology will be. However, the associated security risk themes will remain similar: these risks exist and require controls to mitigate them.
Guard Your Data Against 5G’s Threat
Whether your organization’s computing devices are in the field connected via 5G, on a local area corporate network, or in an employee’s home office environment, the approach to tracking and mitigating the associated risk remains the same. Regardless of your industry, a fundamental approach to tracking an organization’s 5G risk starts with the data being stored, transmitted or processed.
Establishing the types of data being handled and where that data resides – whether on-premises devices, cloud providers, or remote employee laptops – provides a key baseline for data risk awareness.
A standard technology utilized to establish awareness of data within modern security-aware organizations is data discovery. This technology crawls across every type of data within servers, laptops, cloud and email systems looking for every hidden instance of personal, sensitive and confidential data.
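At its core, data discovery is pattern matching over every file and message store. The sketch below shows the idea with a few regexes; real discovery tools use far more robust detection (validation checksums, context analysis, ML) and these patterns are illustrative only:

```python
import re

# Illustrative detectors for common classes of sensitive data.
PATTERNS = {
    "email":       re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def discover(text):
    """Return {pattern_name: [matches]} for sensitive data found in text."""
    hits = {}
    for name, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[name] = found
    return hits

sample = "Contact jane@example.com, SSN 123-45-6789 on file."
print(discover(sample))  # flags the email address and the SSN
```

A production scanner would walk file systems, mailboxes, and cloud buckets and feed each document through detectors like these, recording where every hit lives so it can be reviewed and remediated.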
Irrespective of your chosen technology approach, the key principle is establishing an awareness of data, including where it is, what it is, and establishing an ongoing plan to regularly review and remediate any new findings.
About the Author:
Stephen Cavey, Co-founder and Chief Evangelist, Ground Labs
The post An Overlooked Cybersecurity Threat: 5G appeared first on eWEEK.
If we track the recent progress of the Chief Information Security Officer (CISO), there’s good reason to wonder if they are headed toward the visibility once reserved for CEOs, given how today’s dramatic security challenges have boosted their profile.
In a relatively short time, we’ve seen cybersecurity move from being an afterthought to becoming central to business operations. It really wasn’t until the very end of the millennium that the Melissa virus, coupled with the fear of Y2K disasters, launched “hacking” and data security into public consciousness.
Since that time—a mere 20 years ago—we’ve seen a rapid evolution of the role of the CISO from a back-office controls and risk mitigation function to one of the most influential voices in the boardroom. CISOs are responsible for guarding against attacks that are not only costly in terms of revenue but also brand reputation.
In an era of rapid digital transformation, the role of the CISO has shifted to that of an “enabler,” helping companies securely move at the speed of the market. It’s not a stretch to assume that as the significance of the role continues to increase, so too will the public interest in the people holding these roles.
In fact, we’re already starting to see this shift as CISOs are increasingly being called upon to serve as thought leaders and experts in the eyes of external stakeholders.
Taking Center Stage: A Challenging Balance
Just as many brands have benefitted from the robust personalities of their CEOs, there is a corresponding argument to be made that putting the CISO front and center can be beneficial.
Data security remains a polarizing topic. According to a recent survey from KPMG, 67% of the U.S. general population say they want more transparency around how their personal data is being used by companies. And 40% say they would willingly share their personal data if they knew exactly how it would be used—and by whom.
Similarly, in a “show, don’t tell” era, consumers may place more trust in an organization if they feel they know the person responsible for ensuring their data safety. Humanizing the function by putting a name, face, and personality behind security and privacy measures can help convince consumers that the organization is truly, personally invested in securing their information.
But such exposure comes with its own set of risks. Elevating and celebrating the CISO could give cybercriminals an extra incentive to target the company—looking to specifically take down that figure.
Also see: 5 Cloud Security Trends in 2022
Best Practices for Today’s CISO: Earning Trust
CISOs and aspiring CISOs would do well to prepare for the eventuality of life in the public eye. Here are some guidelines.
Use Your Personal Brand for Good
The most important aspect of building your personal brand is understanding its purpose. Why are you building your brand? What are you hoping to accomplish? Almost invariably, the answer is to build stakeholder trust.
Always Work Through the Lens of Trust
Trust is earned in drips but lost in buckets. The unavoidable truth is that – if you’re a public figure – there is no such thing as off the record. You have to proceed under the assumption that the mic is always hot, and the camera app is always on “record.”
Before you speak, post, or act, ask yourself: Will this inspire trust or erode it? By the same token, remember that if the goal is to build trust, you need to maintain an open and honest approach with your audience.
Choose Your Platform
Even though it’s called a “personal brand,” the lion’s share of your content will center around your professional expertise. As you endeavor to stand apart from the pack of fellow CISOs and would-be CISOs, you’ll want to focus on educating a wider audience on a topic you feel is important yet not widely understood.
Set Your Own Boundaries
If you find yourself asked to be a public figure on behalf of your company, remember that “showing your whole self” is a sliding scale. It does not mean you need to tweet that back-to-school picture of your fourth grader.
It might mean sharing some snaps of your new puppy if you’re comfortable with that. Or it might mean sharing a hobby that you’re passionate about. Remember, the goal is to help your audience understand the real you—but you decide where to let them in.
You Can’t Fake It
Creating a persona that is not true to you is a recipe for failure. It is not sustainable, and the world has become too interconnected with too many people having a microphone for you to successfully present a lie.
All it takes is one viral post from a friend or acquaintance who truly knows you to blow your cover, and in doing so destroy any trust you’ve cultivated.
Seek Expert Help
You’re a CISO because you are an expert in information security – and that is where your focus can and should remain. When it comes to building and maintaining your brand, seek out the experts. If your company is pushing you to be “more public-facing,” ask what resources are available to you to help create content, maintain social media engagement, and secure (and prepare for) traditional media opportunities.
According to a report from Grand View Research, the global cybersecurity services market size is expected to reach USD 192.70 billion by 2028. As the field continues to expand, we may well see the day when it’s commonplace for CISOs to be Twitter verified.
Get ahead of the game by taking steps today to ready your personal brand—but never forget that the goal is not to get famous. Rather, it is to further business objectives and results by building, maintaining, and growing stakeholder trust.
Also see: Cybersecurity in 2022: Solving the Skills Gap
About the Author:
Prasad Jayaraman is a Principal in KPMG’s Advisory Services
The post The Successful CISO: How to Build Stakeholder Trust appeared first on eWEEK.
I spoke with Michael Liebow, Global Head of Atos OneCloud, about how legacy architecture plays a core role even in today’s newest cloud deployments; he also offered advice on optimizing cloud strategy.
The post Atos OneCloud’s Michael Liebow on Cloud Computing Challenges and Solutions appeared first on eWEEK.
The networking landscape has shifted dramatically over the past two years as remote work, cloud migration, and container-based architectures have reshaped network infrastructure. This digital transformation has put added pressure on NetOps teams to gain visibility into on-premises, cloud, and hybrid environments to ensure performance of the entire network and applications, regardless of location.
As a result, traditional network performance monitoring (NPM) tools are no longer enough for organizations that want to proactively plan, monitor, and optimize their network services or that want to find and fix network performance problems quickly.
In fact, according to Gartner’s Market Guide for Network Performance Monitoring 2021, “by 2025, 60% of organizations will have seen a reduction in traditional network monitoring tool needs due to increases in remote work and cloud migration, as compared to 2021.”
In essence, organizations can no longer afford to have visibility gaps across infrastructure. To overcome this challenge, they need to ensure their infrastructure is equipped to handle issues both on and off premises as well as leverage a combination of data sources to provide a holistic end-to-end view of the entire network.
How is this done? Let’s dive into more details on the state of NPM and how it’s adapting to offer cloud and hybrid visibility.
Also see: The New Focus on CloudOps: How Enterprise Cloud Migration Can Succeed
Combining Data Sources to Identify Problems
NPM tools leverage a combination of data sources, including network-device-generated traffic data; raw network packets; and network-device-generated health metrics and events to monitor, diagnose, and find performance issues. This includes giving NetOps teams forensic data to identify the root cause of performance issues and insights into the end-user experience.
Traditional NPM tools focus on the core network and data center, capturing information from within the traditional network perimeter. The traffic types, flow rates, and traffic patterns are largely known quantities, apart from the occasional network anomaly.
Customers with on-premises network designs have a robust, scalable, and stable environment built with the help of firewalls, reliable WAN edge devices, and other network components. In these environments, NPM tools are aware of 99% of the possible issues that could happen, and they monitor and act based on this analysis.
With customers migrating to cloud and container-based architectures (and other technologies like SD-WAN and microservices), it’s become more difficult to capture traffic and isolate problems. Today, organizations need tools that can monitor LAN, WAN, and into the cloud, so let’s dive into each area that’s impacting this shift around visibility.
Redesigning Networks for Remote Work
It’s no secret that the remote workforce has resulted in organizations redesigning network infrastructure, but these changes often don’t account for monitoring or visibility.
For example, with the increase in the remote workforce, the number of connections coming in through the VPN concentrators or firewall has increased tremendously for most large enterprises. These devices and network designs must be reworked to accommodate this increase in scale, throughput, and number of user access licenses.
With more remote workers, enterprises are looking for NPM tools that can monitor and analyze traffic patterns, utilization, and application monitoring from the VPN concentrators. NPM tools are now available to read useful data (Flow, SNMP, API, etc.) from these devices to help analyze and monitor remote user traffic.
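The per-user traffic analysis described above boils down to aggregating flow records into a "top talkers" view. The sketch below uses invented field names and addresses rather than a specific NetFlow/IPFIX schema:

```python
from collections import Counter

# Simplified flow records as a tool might export them from a VPN
# concentrator: (source_ip, destination_ip, bytes). Values are made up.
flows = [
    ("10.0.0.5", "172.16.1.10", 500_000),
    ("10.0.0.5", "172.16.1.11", 250_000),
    ("10.0.0.9", "172.16.1.10", 120_000),
]

def top_talkers(flows, n=2):
    """Aggregate bytes per source IP and return the n heaviest users."""
    usage = Counter()
    for src, _dst, nbytes in flows:
        usage[src] += nbytes
    return usage.most_common(n)

print(top_talkers(flows))  # [('10.0.0.5', 750000), ('10.0.0.9', 120000)]
```

An NPM tool runs this same roll-up continuously over millions of records, with the flow data arriving via NetFlow/IPFIX export or SNMP polling rather than an in-memory list.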
Cloud Migration and Cloud Native Computing
Cloud migration has been happening for years but, due to recent events, has accelerated at a staggering pace. The drive toward more cloud-enabled applications and services, which may not be owned by the organizations using them, further complicates monitoring and troubleshooting.
For example, multiple cloud companies (like Google, Amazon, and Microsoft) offer cloud solutions and also provide services and applications hosted on their portals or managed by other vendors.
With this mix of vendors, finding useful, readable data for NPM tools to understand and monitor can be challenging. But NPM tools are now able to capture raw cloud data and convert it to readable IPFIX data, or use APIs to create useful reports for monitoring and analysis.
Some NPM tools have global or private agents deployed in the cloud at various sites, which make use of synthetic traffic for monitoring and analyzing network SLAs. Cloud vendors are also trying to add more ways (advanced API, service tags, etc.) for raw data to be easily accessible for NPM tools.
Container-Based Architectures
Containerization and microservices allow an organization to package software and its dependencies in an isolated unit, either on-premises or in the cloud. Having visibility into these services is crucial for managing performance across a user base, but it’s fundamentally different due to changes in traffic flow.
NPM tools need to be able to access the raw data from these containers and microservices, read it, analyze it, and export it in a useful, user-reportable format. Each vendor has its own way of implementing how containers or services are hosted on-premises or in the cloud, and its own format in which raw data can be accessed. Most vendors have APIs to get at this data; it’s just a matter of NPM tools implementing the API format to fetch it.
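In practice, "implementing the API format" usually means calling a vendor's metrics endpoint and normalizing the JSON into a common report shape. This is a hedged sketch: the endpoint path, field names, and payload structure are all hypothetical, not any specific vendor's API.

```python
import json
import urllib.request

def normalize(raw):
    """Flatten a (hypothetical) vendor JSON payload into report rows."""
    return [
        {"name": c["name"], "cpu_pct": c["cpu"], "mem_mb": c["memory"] / 1e6}
        for c in raw.get("containers", [])
    ]

def fetch_container_metrics(base_url, token):
    """GET the vendor's metrics endpoint (path and auth are illustrative)."""
    req = urllib.request.Request(
        f"{base_url}/metrics/containers",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return normalize(json.load(resp))
```

Keeping the normalization step separate from the fetch is the key design point: each vendor needs its own `normalize`, but everything downstream (reports, alerting, analytics) consumes one common row format.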
Different cloud solutions host the end-to-end solution in different ways. For example, deploying an NPM tool and reading raw data from Amazon AWS is completely different from hosting the tool on the Google GCP platform. The data formats provided by different vendors have different data variables, service names, region formats, etc. There are also large new enterprise private cloud solutions emerging (such as SAP and Salesforce private clouds), which add to the complexity of reading data from hybrid cloud-hosted enterprises.
The shift to cloud and the changing network architecture is putting a premium on NPM solutions that work across all these technologies. This means capturing data that includes streaming telemetry, flow data, packet data, and SNMP, which can be used for such outputs as predictive analysis, real-time monitoring, AI/ML-assisted analytics, and historical analysis.
Enterprises now need to think about how NPM tools are planning to address new hosted technologies and migration from an on-premises design to a complete cloud solution.
About the Author:
Jubil Mathew is a Technical Engineer at LiveAction
The post Why NPM Tools Need to Work Across On-Prem, Cloud, and Hybrid Environments appeared first on eWEEK.
As we move into 2022, there’s little doubt that we’ll see the growth in intelligent automation (IA) continue to gather momentum as more decision makers realize the transformative power of this emerging technology. Building on this, a key shift we’ll see is the realization that only a truly unified workforce can deliver a business’s full potential. A unified workforce is one where humans and IA – intelligent digital robots – work as a single cohesive construct.
We’ve now moved past the fear paradigm of seeing digital labor as a risk and instead see it as a partner, or collaborator, and as a means to excel. As mindsets around digital transformation and the future of work progress, so will the technology itself. There has been an impressive growth in the capabilities of intelligent automation in recent years, boosted by advances in artificial intelligence (AI) and complementary technologies.
Moreover, robotic process automation (RPA) has become a key platform and business enabler for AI as businesses leverage AI to increase the sophistication of what can be automated and to support more complex automated interactions between digital workers and human workers and customers.
Throughout 2022, these developments will accelerate even further, continually adding to digital robots’ skill sets. The embedding of AI and machine learning (ML) into digital robots will enable businesses to make automations faster and cheaper to program and execute. This will increase the scope over which they can work, and ease the path toward being able to orchestrate digital robots as they expand across the enterprise. This, in turn, will lead to rapid scaling of automation programs to play increasingly strategic roles in businesses.
Autonomous Intelligent Automation: Business Uses
As intelligent automation becomes more “intelligent,” we’ll gradually be able to see other changes in the way the technology is used. For example, vendors will move toward developments such as autonomous IA, a form of intelligent automation that can be self-defining, self-managing, and self-healing.
The lines between digital process automation (DPA), iPaaS, no code, robotic process automation, and other forms of automation are blurring too. So we will most likely see a future trend toward IA platforms that provide the means to create, manage, and control operations that leverage multi-modal automation across both humans and digital robots.
Shifting mindsets and advancing technology might be heading this way, but what does this mean for businesses? Greater use of intelligent automation will increasingly lead to improved business outcomes for specific target markets and industries – see below. Solutions will become more tailored toward particular industries to deliver their own strategic business outcomes. These might include:
Intelligent automation will continue to become increasingly strategic—more focused on key corporate goals such as competitiveness, revenue growth, customer service, and market growth.
Also see: What is AIOps?
Select Industries at the Forefront
It’s become clear that intelligent automation is applicable to almost all sectors but, heading into 2022, growth is expected to be particularly strong in a handful of industries.
Retail has only scratched the surface when it comes to intelligent automation, which offers a hugely efficient way to boost sales, increase customer engagement, and reduce costs.
Manufacturing will leverage intelligent automation to improve customer experience – a trend well under way – and help to swiftly adapt to regulatory changes, and manage complex supply chains.
Healthcare is clearly seeing accelerated usage of intelligent automation. Organizations have recognized its strength in improving interoperability and back-office efficiencies; providing financial stability; enhancing workforce satisfaction; and, most importantly, elevating patient experience.
Utilities are primed for strong growth in IA. Higher customer expectations, adaptations to tackle climate change, regulatory compliance, and an aging infrastructure and workforce are all factors that will drive utility businesses toward intelligent automation throughout 2022.
Also see: Top Digital Transformation Companies
A Digital-First Strategy
More than anything, businesses will realize the need to make intelligent automation a strategic priority. Already, tactical and quick-win automations are a thing of the past, so businesses need to go further than they currently are in order to achieve the maximum ROI.
Organizations will approach their challenges at the highest level, orchestrating and rearranging the future management of work, leveraging all business application interfaces – be they legacy applications, modern apps or APIs – to work for their strategic business goals.
However, the deployment and integration of digital robots into the existing workforce is only half the story here. Organizations will give more consideration to how IA can augment their human counterparts’ workloads, enabling them to do more. As intelligent automation’s presence grows within organizations, companies will put greater emphasis on how this new unified workforce is orchestrated for competitive advantage.
In sum, businesses will feel less limited by organizational constructs by completely reimagining processes with a digital-first mindset.
Also see: DevOps, Low Code and RPA: Pros and Cons
About the Author:
Eric Tyree is the head of AI and research at Blue Prism
The post What Does 2022 Hold for Intelligent Automation? appeared first on eWEEK.
On Tuesday, January 18, at 11 AM PT, @eWEEKNews will host its monthly #eWEEKChat. The topic will be “Digital Transformation Trends,” and it will be moderated by James Maguire, eWEEK’s Editor-in-Chief.
We’ll discuss – using Twitter – the trends that will shape digital transformation in 2022. On a related note, how will data, AI, cloud computing and security be affected by digital transformation?
How to Participate: On Twitter, use the hashtag #eWEEKChat to follow/participate in the discussion. But it’s easier and more efficient to use the real-time chat room link at CrowdChat.
Instructions are on the Digital Transformation Trends Crowdchat page: Log in at the top right, use your Twitter handle to register. The chat begins promptly at 11 AM PT. The page will come alive at that time with the real-time discussion. You can join in or simply watch the discussion as it is created.
Special Guests, Digital Transformation Trends
The list of experts for this month’s Tweetchat currently includes the following – please check back for additional expert guests:
Chat room real-time link: Go to the Crowdchat page. Sign in with your Twitter handle and use #eweekchat for the identifier.
Questions for the Tweetchat
The questions we’ll tweet about will include – check back for more/revised questions:
Go here for CrowdChat information.
#eWEEKchat Tentative Schedule for 2022*
Jan. 18: Trends in Digital Transformation
Feb. 15: Navigating Multicloud Computing
March 15: Low Code / No Code Trends
April 12: Edge Computing: Monitoring, Observability and More
May 17: Data Analytics: Optimizing Your Practice
June 14: Expanding Your AI Deployment
*all topics subject to change
The post eWEEK TweetChat, Jan. 18: Digital Transformation Trends appeared first on eWEEK.