15 hot tech skills getting hotter — no certification required

There’s no debate that hot IT certifications are worthy add-ons for tech professionals trying to boost their job prospects. But the problem with certs is that they are mostly limited to infrastructure roles and related technology products. The vendors of these products are obligated to train their customers in how to use them, and offer certifications for this purpose that are arguably easy to obtain. This leaves countless tech skills for which either no certification exists or certification simply doesn’t matter to employers, who are nonetheless eager to place a value on these skills and offer extra cash to workers who acquire expertise in them.

Since 2000, Foote Partners has tracked and reported cash pay premiums paid to tech workers for 1,101 certified and noncertified tech skills in its quarterly-updated IT Skills and Certifications Pay Index™ (ITSCPI). 593 of them are without certification. That’s a lot of skills, and the survey demographics behind the ITSCPI are equally impressive: 80,186 U.S. and Canadian tech professionals at as many as 3,604 private and public sector employers, earning dollars for their certified and noncertified skills, typically outside of base pay, as reported to us by their employers.

High paying and going higher

The following noncertified tech skills meet two prerequisites: they are earning workers cash pay premiums well above the average of all skills reported, and they recorded gains in cash market value in the first six months of 2020. No skill below is earning less than the equivalent of 16 percent of base salary, which is significant considering the average for all skills reported is 9.6 percent of base. They are listed in descending rank order, first by cash premium earned and second by amount of market value increase (ties included).

Not surprisingly, the list contains a number of security, coding, database, analytics and artificial intelligence related skills.

1. DevSecOps

Market Value Increase: 5.6 percent (in the six months through July 1, 2020)              

DevSecOps is the philosophy of integrating security practices within the DevOps process and involves creating a ‘Security as Code’ culture with ongoing, flexible collaboration between release engineers and security teams. It’s a natural and necessary response to the bottleneck effect of older security models on the modern continuous delivery pipeline. The goal is to bridge traditional gaps between IT and security while ensuring fast, safe delivery of code. Silo thinking is replaced by increased communication and shared responsibility of security tasks during all phases of the delivery process.

In DevSecOps, two seemingly opposing goals, “speed of delivery” and “secure code,” are merged into one streamlined process, and this makes it valuable to employers. In alignment with lean practices in agile, security testing is done in iterations without slowing down delivery cycles. Critical security issues are dealt with as they become apparent, not after a threat or compromise has occurred. Six components comprise a DevSecOps approach:

  • Code analysis – deliver code in small chunks so vulnerabilities can be identified quickly.
  • Change management – increase speed and efficiency by allowing anyone to submit changes, then determine whether the change is good or bad.
  • Compliance monitoring – be ready for an audit at any time (which means being in a constant state of compliance, including gathering evidence of GDPR compliance, PCI compliance, etc.).
  • Threat investigation – identify potential emerging threats with each code update and be able to respond quickly.
  • Vulnerability assessment – identify new vulnerabilities with code analysis, then analyze how quickly they are being responded to and patched.
  • Security training – train software and IT engineers with guidelines for set routines.
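
To make the “security as code” idea concrete, here is a minimal sketch of a pipeline gate that runs two of the checks above on every change. It assumes the Bandit static analyzer and the pip-audit dependency scanner are installed; those tool choices are illustrative, not a prescribed DevSecOps toolchain.

```python
"""Minimal sketch of a DevSecOps-style pipeline gate (illustrative only)."""
import subprocess
import sys

# Each check mirrors a component above: code analysis and vulnerability
# assessment run on every small change, so issues surface early.
CHECKS = [
    ("static code analysis", ["bandit", "-r", "src/"]),
    ("dependency vulnerability scan", ["pip-audit"]),
]

def run_security_gate() -> int:
    failures = 0
    for name, cmd in CHECKS:
        print(f"Running {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED: {name}")
            failures += 1
    return failures

if __name__ == "__main__":
    # A non-zero exit fails the CI job, blocking the merge until fixed.
    sys.exit(1 if run_security_gate() else 0)
```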

2. Security architecture and models

 Market Value Increase: 5.6 percent (in the six months through July 1, 2020)

Two fundamental concepts in computer and information security are the security model, which outlines how security is to be implemented (in other words, providing a “blueprint”), and the security architecture of a computer system, which fulfills this blueprint. Security architecture is a view of the overall system architecture from a security standpoint: how the system is put together to satisfy the security requirements. It describes the logical hardware, operating system, and software security components, and how to implement those components to architect, build and evaluate the security of computer systems. With cybersecurity related skills gaining prominence and the threat landscape continuing to be a core business issue, we expect security models and architecting skills to continue to be strong going forward.

3. RStudio

Market Value Increase: 21.4 percent (in the six months through July 1, 2020) 

RStudio is an integrated development environment for R, a programming language for statistical computing and graphics, and for Python. It is available in two formats: RStudio Desktop and the web browser-accessible RStudio Server, which runs on a remote server. RStudio is partly written in the C++ programming language and uses the Qt framework for its graphical user interface; however, a bigger percentage of the code is written in Java and JavaScript. The keys to RStudio’s popularity for analyzing data in R include:

  • R is open source. It’s free, which is an advantage compared with paying for MATLAB or SAS licenses. This is also important if you’re working with global teams in areas where software is expensive or inaccessible. It also means that R is actively developed by a community, and there are regular updates.
  • R is widely used. R is used in many subject areas (not just bioinformatics), making it more likely that you’ll find help online when it’s needed.
  • R is powerful. R runs on multiple platforms (Windows/MacOS/Linux). It can work with much larger datasets than popular spreadsheet programs like Microsoft Excel, and because of its scripting capabilities it is more reproducible. There are thousands of available software packages for science, including genomics and other areas of life science.

4. [Tie] Cryptography; Natural language processing; Neural networks; and Master data management

Market Value Increase: 6.3 percent (in the six months through July 1, 2020)       

 Cryptography (or cryptology) is the practice and study of techniques for secure communication in the presence of third parties called adversaries. More generally, cryptography is about constructing and analyzing protocols that prevent third parties or the public from reading private messages. Modern cryptography exists at the intersection of the disciplines of mathematics, computer science, electrical engineering, communication science, and physics and includes various aspects of information security such as data confidentiality, data integrity, authentication, and non-repudiation. Applications of cryptography include electronic commerce, chip-based payment cards, digital currencies, computer passwords, and military communications.
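
As a small illustration of confidentiality and integrity in practice, the sketch below uses the third-party Python cryptography package (an assumed dependency; any comparable library would do) to encrypt and authenticate a message with a symmetric key.

```python
# Minimal sketch of symmetric, authenticated encryption; assumes the
# third-party "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # secret key shared by sender and receiver
f = Fernet(key)

token = f.encrypt(b"wire transfer approved")   # confidentiality: only ciphertext travels
print(token)

# Decryption also verifies integrity: a tampered token raises InvalidToken.
print(f.decrypt(token))
```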

Humans don’t speak in zeros and ones, but there’s a lot of benefit and productivity to be gained when machines are taught to read, decipher, understand, and make sense of human language in a manner that is valuable.

That’s the goal of natural language processing, usually shortened as NLP. Early efforts at this include pieces of digital assistants like Alexa, Microsoft Cortana, Google Assistant, and Siri. It’s the driving force behind such common applications as Google Translate, the grammatical checking in Microsoft Word, and Interactive Voice Response (IVR) applications used in call centers. NLP is also essential when it comes to working with many types of unstructured data such as the data in electronic health records, emails, text messages, transcripts, social media posts — anything with a language component. It’s through NLP that we can get to more advanced technologies such as sentiment analysis.

NLP involves applying algorithms to identify and extract the natural language rules such that the unstructured language data is converted into a form that computers can understand.

When the text has been provided, computers utilize algorithms to extract meaning associated with every sentence and collect the essential data from them.  Many different classes of machine-learning algorithms have been applied to natural-language-processing tasks. These algorithms take as input a large set of “features” that are generated from the input data. Thus, NLP has evolved into research focused on statistical models which make soft, probabilistic decisions based on attaching real-valued weights to each input feature. These models have the advantage that they can express the relative certainty of many different possible answers rather than only one, producing more reliable results when such a model is included as a component of a larger system.
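
A minimal sketch of such a statistical model, assuming scikit-learn is available and using a tiny made-up dataset, shows how real-valued feature weights yield soft, probabilistic decisions rather than a single hard answer.

```python
# Minimal sketch of a statistical NLP classifier (assumes scikit-learn;
# the tiny dataset below is invented purely for illustration).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund not received", "love the new update",
         "app keeps crashing", "great support team"]
labels = ["complaint", "praise", "complaint", "praise"]

# The vectorizer turns unstructured text into numeric features; the model
# attaches a real-valued weight to each feature, as described above.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# predict_proba expresses the relative certainty of each possible answer.
print(model.classes_)
print(model.predict_proba(["the update crashes constantly"]))
```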

Systems based on machine-learning algorithms have many advantages, all of which are driving NLP forward as a hot skill area to invest in. Consider the following:

  • Learning procedures used during machine learning automatically focus on the most common cases, whereas when writing rules by hand it is often not at all obvious where the effort should be directed.
  • Automatic learning procedures can make use of statistical inference algorithms to produce models that are robust to unfamiliar input (e.g. containing words or structures that have not been seen before) and to erroneous input (e.g. with misspelled words or words accidentally omitted). By contrast, creating systems of handwritten rules that make soft decisions is extremely difficult, error-prone and time-consuming.
  • Systems based on automatically learning the rules can be made more accurate simply by supplying more input data. There is a limit to the complexity of systems based on handcrafted rules, beyond which the systems become more and more unmanageable. But creating more data to input to machine-learning systems simply requires a corresponding increase in the number of man-hours worked, generally without significant increases in the complexity of the annotation process.

Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text or time series, must be translated; this is how they help cluster and classify. You can think of them as a clustering and classification layer on top of the data you store and manage. They help to group unlabeled data according to similarities among the example inputs, and they classify data when they have a labeled dataset to train on. Neural networks can also extract features that are fed to other algorithms for clustering and classification; you can think of deep neural networks as components of larger machine-learning applications involving algorithms for reinforcement learning, classification and regression.
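
For a concrete sense of the classification layer described above, here is a minimal sketch that trains a small neural network on synthetic labeled data; it assumes scikit-learn and is illustrative only.

```python
# Minimal sketch of a small neural network classifying labeled data
# (assumes scikit-learn; data is synthetic).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One hidden layer of 32 units; the network learns numerical patterns in the
# feature vectors and classifies unseen examples.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```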

Because of their ability to reproduce and model nonlinear processes, neural networks have found applications in many disciplines, with many more to follow as employers continue to build on these capabilities and acquire or develop the tech skills internally to execute them. Here are examples of applications already in play:

  • System identification and control (e.g. vehicle control, trajectory prediction, process control) 
  • Quantum chemistry
  • Pattern recognition (e.g. radar systems, face identification, signal classification, 3D reconstruction, object recognition)
  • Sequence recognition (gesture, speech, handwritten and printed text) 
  • Medical diagnosis (e.g. various cancers)
  • Natural disaster infrastructure reliability analysis
  • Finance (e.g. automated trading systems) 
  • Data mining and visualization 
  • Machine translation
  • Social network filtering
  • Building black-box models (e.g. geoscience: hydrology, ocean modelling and coastal engineering, and geomorphology)
  • Cybersecurity (e.g. discriminating between legitimate and malicious activities, penetration testing, botnet detection, credit card fraud and network intrusions)
  • General game playing

Master data management (MDM) arose out of the necessity for businesses to improve the consistency and quality of their key data assets, such as product data, asset data, customer data, location data, etc. Many businesses today, especially global enterprises, have hundreds of separate applications and systems where data that crosses organizational departments or divisions can easily become fragmented, duplicated and, most commonly, out of date. When this occurs, accurately answering even the most basic but critical questions about any type of performance metric or KPI for a business becomes hard. The basic need for accurate, timely information is acute, and as sources of data increase, managing it consistently and keeping data definitions up to date so all parts of a business use the same information is a never-ending challenge. That’s what has driven, and will continue to drive, a premium on MDM skills.
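
A minimal sketch of one common MDM step, consolidating fragmented records into a single up-to-date “golden record,” is shown below; it assumes pandas, and the records, columns, and survivorship rule are invented for illustration.

```python
# Minimal sketch of a "golden record" consolidation step in master data
# management (assumes pandas; records and rules are made up).
import pandas as pd

customers = pd.DataFrame([
    {"customer_id": 1, "source": "crm",     "email": "a.lee@example.com", "updated": "2020-03-01"},
    {"customer_id": 1, "source": "billing", "email": "alee@example.com",  "updated": "2020-06-15"},
    {"customer_id": 2, "source": "crm",     "email": "b.kim@example.com", "updated": "2020-05-20"},
])

# Rule for this sketch: keep the most recently updated record per customer,
# so every system can be fed the same, current definition of the customer.
customers["updated"] = pd.to_datetime(customers["updated"])
golden = (customers.sort_values("updated")
                   .groupby("customer_id", as_index=False)
                   .last())
print(golden)
```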

8. [Tie] Cloud Foundry & Cloudera Impala               

Market Value Increase: 14.3 percent (in the six months through July 1, 2020) 

Cloud Foundry is an open source, multi-cloud application platform as a service (PaaS). Unlike most other cloud computing platform services — which are tied to particular cloud providers — Cloud Foundry is a container-based architecture running apps in any programming language over a variety of cloud service providers. If desired, you can deploy it on AWS, but you can also host it yourself on your own OpenStack server, or through HP Helion or VMware vSphere. Cloud Foundry is promoted for continuous delivery as it supports the full application development lifecycle, from initial development through all testing stages to deployment. Its architecture runs apps in any programming language over a variety of cloud service providers, allowing developers to use the cloud platform that suits specific application workloads and move those workloads as necessary within minutes with no changes to the application.

Cloud Foundry is optimized to deliver fast application development and deployment; highly scalable and available architecture; DevOps-friendly workflows; a reduced chance of human error; and multi-tenant compute efficiencies. Key benefits of Cloud Foundry that power its popularity include:

  • Application portability.
  • Application auto-scaling.
  • Centralized platform administration.
  • Centralized logging.
  • Dynamic routing.
  • Application health management.
  • Integration with external logging components like Elasticsearch and Logstash.
  • Role-based access for deployed applications.
  • Provision for vertical and horizontal scaling.
  • Infrastructure security.
  • Support for various IaaS providers.

Cloudera Impala is an open source Massively Parallel Processing (MPP) query engine that provides high-performance, low-latency SQL queries on data stored in popular Apache Hadoop file formats. The fast response for queries enables interactive exploration and fine-tuning of analytic queries, rather than the long batch jobs traditionally associated with SQL-on-Hadoop technologies, meaning that data can be stored, shared, and accessed using various solutions in a way that avoids data silos and minimizes expensive data movement. Impala returns results typically within seconds or a few minutes, rather than the many minutes or hours that are often required for Hive queries to complete. We cannot overstate the value of this to advanced data analytics platforms and the work of data scientists and analysts engaged in Big Data initiatives, and the impact this has on skills acquisition demand going forward.
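
For a sense of how this looks from an analyst’s workstation, here is a hedged sketch of an interactive query; it assumes the third-party impyla package, a reachable Impala daemon on the default port, and an example web_logs table, none of which come from the article.

```python
# Minimal sketch of an interactive Impala query from Python
# (assumes the impyla package and a reachable Impala daemon).
from impala.dbapi import connect

conn = connect(host="impala-host.example.com", port=21050)
cur = conn.cursor()

# Low-latency SQL over data already stored in Hadoop file formats,
# so exploratory queries return in seconds rather than as batch jobs.
cur.execute(
    "SELECT page, COUNT(*) AS views FROM web_logs "
    "GROUP BY page ORDER BY views DESC LIMIT 10"
)
for row in cur.fetchall():
    print(row)
conn.close()
```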

10. [Tie] Apache Cassandra; Artificial Intelligence; Cyber Threat Intelligence; Data Analytics; Google TensorFlow and Predictive Analytics and Modeling

Market Value Increase: 6.7 percent (in the six months through July 1, 2020)

Apache Cassandra is a highly scalable, high-performance distributed NoSQL database management system designed to handle large amounts of data across many commodity servers, providing high availability with no single point of failure. Cassandra offers robust support for clusters spanning multiple datacenters, with asynchronous masterless replication across cloud service providers, allowing low latency operations for all clients. It can handle petabytes of information and thousands of concurrent operations per second across hybrid cloud environments. Cassandra offers the distribution design of Amazon Dynamo with the data model of Google’s Bigtable.

Aside from being a backbone for Facebook and Netflix, Cassandra is a very scalable and resilient database that is easy to master and simple to configure, providing neat solutions for quite complex problems. Event logging, metrics collection and evaluation, monitoring the historical data — all of these tasks are quite hard to accomplish correctly, given the variety of OS’s, platforms, browsers and devices both startup products and enterprise systems face in their daily operations.

Important advantages driving the popularity of Cassandra:

  • Helps solve complicated tasks with ease (e.g. event logging, metrics collection, performing queries against the historical data)
  • Has a short learning curve
  • Lowers admin overhead and costs for a DevOps engineer
  • Rapid writing and lightning-fast reading
  • Extreme resilience and fault tolerance
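
As a small illustration of the event-logging use case, the sketch below writes and reads rows with the DataStax Python driver; the keyspace, table, and data are invented, and a locally reachable cluster is assumed.

```python
# Minimal sketch of writing and reading event-log rows with the DataStax
# Python driver (assumes the cassandra-driver package and a local cluster;
# keyspace and table names are made up for illustration).
from datetime import datetime
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])
session = cluster.connect("monitoring")   # assumed keyspace

# Fast writes: append one event per row, partitioned by service.
session.execute(
    "INSERT INTO events (service, event_time, level, message) VALUES (%s, %s, %s, %s)",
    ("checkout", datetime(2020, 7, 1, 12, 0), "ERROR", "payment timeout"),
)

# Fast reads: pull recent events for a single partition.
rows = session.execute(
    "SELECT event_time, level, message FROM events WHERE service = %s LIMIT 10",
    ("checkout",),
)
for row in rows:
    print(row.event_time, row.level, row.message)

cluster.shutdown()
```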

Artificial Intelligence (aka A.I.) is a term that means different things to different people, from robots coming to take your jobs to the digital assistants in your mobile phone and home. But it is actually a term that encompasses a collection of technologies that include machine learning, deep learning, natural language processing, computer vision, and more. Artificial intelligence can also be divided into ‘narrow A.I.’ and ‘general A.I.’. Narrow A.I. is the kind we most often see today: A.I. suited for a narrow task. This could include recommendation engines, navigation apps, or chatbots. These are A.I.s designed for specific tasks. Artificial general intelligence is about a machine performing any task that a human can perform; this technology is rapidly expanding, though it remains relatively aspirational for many organizations.

Machine learning is typically the first step for organizations that are adding A.I.-related technologies to their IT portfolio, and one of the reasons why pay for A.I. skills is growing. This is about automating the process of creating algorithms by using data to “train” them rather than having human software developers write code. Basically, what you are doing is showing the algorithm examples, in the form of data. By “looking” at all these examples, the machine learns to recognize patterns and differences.

Deep learning takes machine learning a few steps further by creating layers of machine learning beyond the first decision point. These hidden layers are called a neural network, as described earlier, and are meant to simulate the way human brains operate. Deep learning works by taking the outcome of the first machine learning decision and making it the input for the next machine learning decision. Each of these is a layer. Python is also the language of deep learning and neural networks.
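
The layering idea can be shown in a few lines of NumPy: the output of the first transformation becomes the input to the second. The weights below are random, not trained; the point is only the data flow between layers.

```python
# Minimal sketch of the "layers" idea (assumes NumPy; weights are random,
# purely to show how one layer's output feeds the next).
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                            # input features

W1, b1 = rng.random((8, 4)), rng.random(8)   # first (hidden) layer
W2, b2 = rng.random((1, 8)), rng.random(1)   # second layer

hidden = np.maximum(0, W1 @ x + b1)                 # layer 1 output (ReLU)
output = 1 / (1 + np.exp(-(W2 @ hidden + b2)))      # layer 2 takes layer 1's output
print(output)
```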

 Cyber Threat Intelligence is what cyber threat information becomes once it has been collected, evaluated in the context of its source and reliability, and analyzed through rigorous and structured tradecraft techniques by those with substantive expertise and access to all-source information. Like all intelligence, cyber threat intelligence provides a value-add to cyber threat information, which reduces uncertainty for the consumer, while aiding the consumer in identifying threats and opportunities. It requires that analysts identify similarities and differences in vast quantities of information and detect deceptions to produce accurate, timely, and relevant intelligence.

Rather than being developed in an end-to-end process, the development of intelligence is a circular process, referred to as the intelligence cycle. In this cycle requirements are stated; data collection is planned, implemented, and evaluated; the results are analyzed to produce intelligence; and the resulting intelligence is disseminated and re-evaluated in the context of new information and consumer feedback. The analysis portion of the cycle is what differentiates intelligence from information gathering and dissemination. Intelligence analysis relies on a rigorous way of thinking that uses structured analytical techniques to ensure biases, mindsets, and uncertainties are identified and managed. Instead of just reaching conclusions about difficult questions, intelligence analysts think about how they reach the conclusions. This extra step ensures that, to the extent feasible, the analysts’ mindsets and biases are accounted for and minimized or incorporated as necessary.

The process is a cycle because it identifies intelligence gaps, unanswered questions, which prompt new collection requirements, thus restarting the intelligence cycle. Intelligence analysts identify intelligence gaps during the analysis phase. Intelligence analysts and consumers determine intelligence gaps during the dissemination and re-evaluation phase.

In cyber threat intelligence, analysis often hinges on the triad of actors, intent, and capability, with consideration given to their tactics, techniques, and procedures (TTPs), motivations, and access to the intended targets. By studying this triad it is often possible to make informed, forward-leaning strategic, operational, and tactical assessments.

  • Strategic intelligence assesses disparate bits of information to form integrated views. It informs decision and policy makers on broad or long-term issues and/or provides a timely warning of threats. Strategic cyber threat intelligence forms an overall picture of the intent and capabilities of malicious cyber threats, including the actors, tools, and TTPs, through the identification of trends, patterns, and emerging threats and risks, in order to inform decision and policy makers or to provide timely warnings.
  • Operational intelligence assesses specific, potential incidents related to events, investigations, and/or activities, and provides insights that can guide and support response operations. Operational or technical cyber threat intelligence provides highly specialized, technically-focused, intelligence to guide and support the response to specific incidents; such intelligence is often related to campaigns, malware, and/or tools, and may come in the form of forensic reports.
  • Tactical intelligence assesses real-time events, investigations, and/or activities, and provides day-to-day operational support. Tactical cyber threat intelligence provides support for day-to-day operations and events, such as the development of signatures and indicators of compromise (IOC). It often involves limited application of traditional intelligence analysis techniques.
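
As a small, concrete example of tactical intelligence in use, the sketch below checks observed file hashes against a feed of indicators of compromise; the hashes, hosts and file names are fabricated placeholders, not real indicators.

```python
# Minimal sketch of tactical CTI in practice: matching observed file hashes
# against a feed of indicators of compromise (IOCs). All values are invented.
KNOWN_BAD_SHA256 = {
    "9f2b" * 16,   # fabricated 64-character hash standing in for a real IOC
}

observed_events = [
    {"host": "workstation-17", "file": "invoice.exe", "sha256": "9f2b" * 16},
    {"host": "workstation-22", "file": "report.pdf",  "sha256": "1c7e" * 16},
]

for event in observed_events:
    if event["sha256"] in KNOWN_BAD_SHA256:
        print(f"ALERT: {event['file']} on {event['host']} matches a known IOC")
    else:
        print(f"ok: {event['file']} on {event['host']}")
```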

Data analytics is the process of examining data sets in order to draw conclusions about the information they contain, increasingly with the aid of specialized systems and software. Data analytics technologies and techniques are widely used in commercial industries to enable organizations to make more-informed business decisions and by scientists and researchers to verify or disprove scientific models, theories and hypotheses.

Data analytics initiatives can help businesses increase revenues, improve operational efficiency, optimize marketing campaigns and customer service efforts, respond more quickly to emerging market trends and gain a competitive edge over rivals — all with the ultimate goal of boosting business performance. Depending on the particular application, the data that’s analyzed can consist of either historical records or new information that has been processed for real-time analytics uses. In addition, it can come from a mix of internal systems and external data sources.
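
A minimal sketch of such an analysis, assuming pandas and using synthetic transaction data, shows how a basic business question gets answered directly from the data.

```python
# Minimal sketch of a descriptive analytics pass over transaction records
# (assumes pandas; the data is synthetic and column names are illustrative).
import pandas as pd

orders = pd.DataFrame({
    "region":  ["East", "West", "East", "South", "West"],
    "revenue": [1200.0, 950.0, 430.0, 780.0, 1650.0],
})

# Aggregate revenue by region to answer a basic business question:
# where is revenue coming from, and how does it compare across regions?
summary = orders.groupby("region")["revenue"].agg(["count", "sum", "mean"])
print(summary.sort_values("sum", ascending=False))
```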

TensorFlow is a popular open-source deep learning library developed at Google, which uses machine learning in all of its products to take advantage of its massive datasets and to improve its search engine, translation, image captioning and recommendations. TensorFlow is also used for machine learning applications such as neural networks. Its flexible architecture allows for the easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. TensorFlow provides stable Python and C APIs, as well as APIs without backwards compatibility guarantees for C++, Go, Java, JavaScript and Swift. Third-party packages are available for C#, Haskell, Julia, R, Scala, Rust, OCaml and Crystal.

Python has always been the language of choice for TensorFlow because it is extremely easy to use and has a rich ecosystem for data science, including tools such as NumPy, scikit-learn, and pandas.
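
Below is a minimal sketch of that workflow: defining, training, and evaluating a small model through TensorFlow’s Keras API. It assumes TensorFlow 2.x, and the data is random noise used only to show the mechanics.

```python
# Minimal sketch of a small TensorFlow/Keras model (assumes TensorFlow 2.x;
# the data is synthetic and exists only to demonstrate the workflow).
import numpy as np
import tensorflow as tf

X = np.random.rand(256, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")   # synthetic binary target

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))   # [loss, accuracy]
```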

Predictive Analytics and Modeling is a process that uses data and statistics to predict outcomes with data models. These models can be used to predict anything from sports outcomes and TV ratings to technological advances and corporate earnings. Predictive modeling is also often referred to as:

  • Predictive analytics
  • Predictive analysis
  • Machine learning

These synonyms are often used interchangeably. However, predictive analytics most often refers to commercial applications of predictive modeling, while predictive modeling is used more generally or academically. Of the terms, predictive modeling is used more frequently. Machine learning is also distinct from predictive modeling and is defined as the use of statistical techniques to allow a computer to construct predictive models. In practice, machine learning and predictive modeling are often used interchangeably. However, machine learning is a branch of artificial intelligence, which refers to intelligence displayed by machines.

Predictive modeling is useful because it gives insight into likely outcomes and allows users to create forecasts. To maintain a competitive advantage, it is critical to have insight into future events and outcomes that challenge key assumptions.

Analytics professionals often use data from the following sources to feed predictive models:

  • Transaction data
  • CRM data
  • Customer service data
  • Survey or polling data
  • Digital marketing and advertising data
  • Economic data
  • Demographic data
  • Machine-generated data (for example, telemetric data or data from sensors)
  • Geographical data
  • Web traffic data

Also see:

  • 14 IT certifications that will survive and thrive in the pandemic
  • Best Places to Work in IT 2020
  • Tech Resume Library: 16 downloadable templates for IT pros
  • Career roadmap: Cloud engineer
  • 2020 IT hiring trends

Original source: https://www.idginsiderpro.com/article/3565179/15-hot-tech-skills-getting-hotter-no-certification-required.html
