TL Consulting Group


How Exploratory Data Analysis (EDA) Can Improve Your Data Understanding Capability

Can EDA help to make my phone upgrade decision more precise? You may have heard the term Exploratory Data Analysis (or EDA for short) and wondered what EDA is all about. Recently, one of the sales team members at TL Consulting Group was thinking of buying a new phone, but they were overwhelmed by the many options and needed to make the decision best suited to their work needs: wait for the new iPhone, or upgrade their current Android phone. Understandably, this left them perplexed, with a number of questions to address before making a choice. What were the specifications of the new phone, and how was it better than their current mobile phone? To satisfy this curiosity and support the decision, they visited YouTube to view the new iPhone trailer and learned more about the new iPhone via user ratings and reviews from YouTube and other websites. Then they asked us how we would approach it from a data analytics perspective. Our response was that whatever investigative steps they had already taken before making the decision were nothing other than what ML engineers and data analysts, in their lingo, call "Exploratory Data Analysis".

What is Exploratory Data Analysis?
In an automated data pipeline, exploratory data analysis (EDA) entails using data visualisation and statistical tools to acquire insights and knowledge from the data as it travels through the pipeline. At each stage of the pipeline, the goal is to find patterns, trends, anomalies, and potential concerns in the data.

Exploratory Data Analysis Lifecycle
To interpret the diagram with the iPhone scenario in mind, you can think of all brand-new iPhones as a "population". To review them, the reviewers take some iPhones from the market, which constitutes a "sample". The reviewers then experiment with those phones and apply different mathematical calculations to define the "probability" of whether the phone is worth buying, and to identify all the good and bad properties of the new iPhone, which is called "inference". Finally, all these outcomes help potential customers make their decision with confidence.

Benefits of Exploratory Data Analysis
The main idea of exploratory data analysis is "Garbage in, perform exploratory data analysis, possibly garbage out." By conducting EDA, it is possible to turn an almost usable dataset into a completely usable dataset. It includes:

Key Steps of EDA
The key steps involved in conducting EDA on an automated data pipeline are:

Types of Exploratory Data Analysis
EDA builds a robust understanding of the data and of issues associated with either the information or the process. It is a scientific approach to getting the story of the data. There are four main types of exploratory data analysis, listed below:

1. Univariate Non-Graphical
Let's say you decide to purchase a new iPhone based solely on its battery size, disregarding all other considerations. You can use univariate non-graphical analysis, which is the most basic type of data analysis, because only one variable is used to examine the data. The usual objectives of univariate non-graphical EDA are to understand the underlying sample distribution of the data and to draw conclusions about the population. Outlier detection is also part of the analysis.
The characteristics of the population distribution include:

Spread: Spread serves as a gauge of how far away from the centre we should look for the data values. Two relevant measures of spread are the variance and the standard deviation. The variance is defined as the mean of the squares of the individual deviations, and the standard deviation is the square root of the variance.

Central tendency: The central tendency or location of a distribution relates to its typical or middle values. Statistics such as the mean, median, and sometimes the mode are valuable indicators of central tendency; the mean is the most prevalent. The median may be preferred for skewed distributions or when there is concern about outliers.

Skewness and kurtosis: The distribution's skewness and kurtosis are two more useful univariate characteristics. Skewness measures asymmetry and kurtosis measures peakedness, in each case relative to a normal distribution.

2. Multivariate Non-Graphical
Think about a situation where you want to purchase a new iPhone based solely on its battery capacity and phone size. Multivariate non-graphical EDA techniques, in the form of either cross-tabulation or statistics, are frequently used to illustrate the relationship between two or more variables. An extension of tabulation known as cross-tabulation is very helpful for categorical data: for two variables, a two-way table is built with column headings corresponding to the levels of one variable and row headings corresponding to the levels of the other variable, and each cell holds the count of all subjects that share that pair of levels. For one categorical variable and one quantitative variable, we create statistics for the quantitative variable separately for each level of the categorical variable and then compare those statistics across the levels of the categorical variable. Comparing means is an informal version of one-way ANOVA, whereas comparing medians is a robust version of it.

3. Univariate Graphical
Different Univariate Graphics
Imagine that you only want to know the latest iPhone's speed, based on its CPU benchmark results, before you decide to purchase it. While non-graphical methods are quantitative and objective, graphical approaches demand some level of subjective interpretation; they are nevertheless used frequently because they can provide a comprehensive picture of the data. Some common sorts of univariate graphics are:

Boxplots: Boxplots are excellent for displaying information on central tendency, showing robust measures of location and spread, as well as information on symmetry and outliers, though they can be deceptive when it comes to multimodality. Side-by-side boxplots are among the simplest applications of boxplots.

Histograms: A histogram is a barplot in which each bar represents the frequency (count) or proportion (count divided by the total) of cases for a range of values of a single variable.
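To ground these three types of EDA, below is a minimal sketch in Python using pandas and matplotlib. It is illustrative only: the phone_reviews.csv file, its column names, and the chosen variables are hypothetical assumptions rather than data referenced in this post.

```python
# Minimal EDA sketch: univariate summaries, a cross-tabulation, and plots.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("phone_reviews.csv")  # hypothetical sample of phone reviews

# 1. Univariate non-graphical: central tendency, spread, skewness, kurtosis
battery = df["battery_mah"]
print(battery.describe())               # count, mean, std, quartiles
print("variance:", battery.var())
print("skewness:", battery.skew())
print("kurtosis:", battery.kurt())

# 2. Multivariate non-graphical: cross-tabulation of two categorical variables
print(pd.crosstab(df["brand"], df["would_recommend"]))
# ...and a quantitative variable summarised per level of a categorical variable
print(df.groupby("brand")["battery_mah"].agg(["mean", "median"]))

# 3. Univariate graphical: boxplot and histogram of a single variable
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
battery.plot.box(ax=axes[0], title="Battery capacity (boxplot)")
battery.plot.hist(ax=axes[1], bins=20, title="Battery capacity (histogram)")
plt.tight_layout()
plt.show()
```

The same pattern applies at each stage of an automated pipeline: compute the summaries and plots on the data as it arrives, and flag anomalies before they propagate downstream.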


Data & AI

The Importance of Feature Engineering in ML Modelling

When building Machine Learning (ML) models, we often encounter unorganised and chaotic data. In order to transform this data into explainable features, we rely on the process of feature engineering. Feature engineering plays a crucial role in the Cross Industry Standard Process for Data Mining (CRISP-DM). It is an integral part of the Data Preparation step, responsible for organising the data effectively before it is ready for modelling. The diagram below illustrates the significance of feature engineering (FE) in the data mining process.

CRISP-DM Process Model

What is Feature Engineering?
Feature engineering (FE) is the process of extracting and organising important information from raw data in such a way that it fits the machine learning (ML) model.

Feature Engineering (FE) Process (Source: https://www.omnisci.com/technical-glossary/feature-engineering)

Why is Feature Engineering Important?
Feature engineering has many benefits to offer in the CRISP-DM process. They include:
More flexibility and less complexity in models
Faster data processing
Models that are easier to understand
A better understanding of the problem and the questions to be answered

Feature Engineering Techniques for Machine Learning (ML)
Below is a list of feature engineering techniques, each of which we summarise in turn (a combined sketch of the first three appears after the log transformation discussion below):
Imputation
Handling Outliers
Log Transformation
One-Hot Encoding
Scaling

1. Imputation
Missing values are one of the most typical problems when it comes to data preparation. Human errors and data-flow interruptions are some of the major contributors to this problem. Moreover, missing values can detrimentally impact the performance of ML models.

An example of imputation of NA values with zero

Imputation is frequently employed in healthcare research, such as when dealing with patient records that have missing values for certain medical measurements. By imputing the missing data using methods like mean imputation or regression imputation, researchers can ensure that a complete dataset is available for analysis, allowing for more accurate assessments and predictions.

2. Handling Outliers
Handling outliers within datasets is an important technique whose purpose is to create an accurate representation of the data. This step must be completed prior to model training. There are various methods of handling outliers, including removal, replacing values, capping, and discretisation. These methods will be discussed in detail in future blogs.

An example of outliers

Handling outliers is essential in financial analysis, for instance when examining stock market data. By detecting and appropriately treating outliers using techniques like winsorisation or trimming, analysts can ensure that extreme values do not unduly influence statistical measures, leading to more robust and reliable insights and decision-making.

3. Log Transformation
Log transformation is one of the most prevalent methods used by data professionals. The technique transforms a skewed distribution of data into normally distributed or slightly skewed data, making the data approximately normal, as required by many kinds of data analysis.

Examples of Log Transformed Data

Log transformation is commonly applied to skewed data distributions, such as income or population data. By taking the logarithm of the values, the skewed distribution can be transformed into a more symmetric shape, facilitating more accurate modelling, analysis, and interpretation of the data.
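Below is a minimal, hedged sketch of the three techniques above (imputation, outlier capping, and log transformation) using pandas and numpy. The toy dataset, column names, and the 5th/95th-percentile capping thresholds are illustrative assumptions; a real project would choose imputation and capping strategies to suit its data.

```python
# Sketch of imputation, outlier capping and log transformation with pandas/numpy.
import numpy as np
import pandas as pd

# Hypothetical dataset with missing values and a heavily skewed column.
df = pd.DataFrame({
    "blood_pressure": [120, 135, np.nan, 142, np.nan, 128],
    "income": [42_000, 51_000, 39_000, 1_250_000, 47_000, 58_000],
})

# 1. Imputation: fill missing measurements, here with the column mean.
df["blood_pressure"] = df["blood_pressure"].fillna(df["blood_pressure"].mean())

# 2. Handling outliers: cap (winsorise) values outside the 5th-95th percentiles.
lower, upper = df["income"].quantile([0.05, 0.95])
df["income_capped"] = df["income"].clip(lower=lower, upper=upper)

# 3. Log transformation: compress the skewed income column towards symmetry.
df["income_log"] = np.log1p(df["income_capped"])

print(df)
```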
4. One-Hot Encoding
One-hot encoding is a technique for preprocessing categorical variables for ML models. The encoding transforms a categorical variable into one binary feature per category, assigning a value of '1' to the binary feature corresponding to the observed category and '0' to all other binary features.

An example of One-Hot Encoding

One-hot encoding is widely used in categorical data processing, such as in natural language processing tasks like sentiment analysis. By converting categorical variables into binary vectors, each representing a unique category, one-hot encoding enables machine learning algorithms to effectively interpret and utilise categorical data, facilitating accurate classification and prediction tasks.

5. Scaling
Feature scaling is one of the hardest problems in data science to get right. However, it is not a mandatory step for all machine learning models; it mainly applies to distance-based machine learning models. The training process requires data with a known set of features to be scaled up or down where appropriate, and the scaling operation transforms continuous data so that features are similar in range. The most popular techniques for scaling are normalisation and standardisation, which will be discussed in detail in future blogs. A short sketch of one-hot encoding and scaling appears at the end of this post.

Examples of Scaling

Scaling is often used in image processing, such as when resizing images for a computer vision task. Scaling the images to a consistent size, regardless of their original dimensions, ensures that the images can be properly processed and analysed, allowing for fair comparisons and accurate feature extraction in tasks like object recognition or image classification.

Feature Engineering Tools
There is a set of feature engineering tools that are popular in the market for the capabilities they provide. We have listed a few of our recommendations:
FeatureTools
AutoFeat
TsFresh
OneBM
ExploreKit

Conclusion
In summary, feature engineering is a crucial step in the CRISP-DM process before we even think about training our machine learning models. One of its core advantages is that the training time of models is reduced significantly, which allows for a drastic reduction in the cost of expensive computing resources. In this article, we covered a number of feature engineering techniques and tools that are used in the industry. Here at TL Consulting, our data consultants are experts at using feature engineering techniques to build highly accurate machine learning models, enabling us to deliver high-quality outcomes to support our customers' data analytics needs. TL Consulting provides advisory and transformation services in the data analytics and engineering domain and has helped many organisations achieve their digital transformation goals. Visit TL Consulting's data-engineering page to learn more about our service capabilities and send us an enquiry if you'd like to learn more about how our dedicated consultants can help you.
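As referenced in the scaling section above, here is a minimal sketch of one-hot encoding and scaling using pandas and scikit-learn. The toy data and column names are illustrative assumptions; StandardScaler (standardisation) is shown, but MinMaxScaler (normalisation) is an equally common choice depending on the model.

```python
# Sketch of one-hot encoding and feature scaling with pandas and scikit-learn.
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Hypothetical toy dataset with one categorical and two numeric features.
df = pd.DataFrame({
    "colour": ["red", "green", "blue", "green"],
    "height_cm": [170, 182, 165, 174],
    "weight_kg": [65, 90, 58, 77],
})

# One-hot encoding: one binary column per category, 1 for the matching category.
encoded = pd.get_dummies(df, columns=["colour"])
print(encoded)

# Scaling (standardisation): zero mean and unit variance per numeric column,
# mainly useful for distance-based models such as k-NN or k-means.
scaler = StandardScaler()
encoded[["height_cm", "weight_kg"]] = scaler.fit_transform(
    encoded[["height_cm", "weight_kg"]]
)
print(encoded)
```

In practice the scaler would be fitted on the training split only and reused on the test split, the usual safeguard against data leakage.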


Data & AI

Top Cloud Plays in 2023: Unlocking Innovation and Agility

Cloud computing has been around since the early 2000s, and the technology landscape continues to evolve rapidly while adoption keeps increasing (roughly 20% CAGR), offering unprecedented opportunities for innovation and digital transformation. The meaning of digital transformation is also changing: cloud decision-makers view digital transformation as more than a "lift and shift"; instead, they see vast opportunity within cloud ecosystems to help reinforce their long-term success. As businesses increasingly embrace the cloud, certain cloud plays have emerged as key drivers of success, underpinned by companies including Microsoft, AWS, Google Cloud and VMware, all of which have developed very strong technology ecosystems that have moved on from a manual and costly data centre model. In this blog, we explore the top cloud plays, from our perspective, that organisations should consider to unlock their full potential in 2023.

Multi-Cloud and Hybrid Cloud Strategies
Multi-cloud and hybrid cloud strategies have gained significant traction in 2023. Organisations are leveraging multiple cloud providers and combining public and private cloud environments to achieve greater flexibility, scalability, and resilience from their investment. Multi-cloud and hybrid cloud approaches allow businesses to choose the best services from different providers while maintaining control over critical data and applications. This strategy helps mitigate vendor lock-in (leveraging Kubernetes container orchestration such as AKS, EKS, GKE and VMware Tanzu), optimise costs, and tailor cloud deployments to specific business requirements and use cases.

Cloud-Native Application Development
Cloud-native application development is a transformative cloud play that enables organisations to build and deploy applications, through optimised DevSecOps practices, specifically designed for advanced cloud environments. This model leverages containerisation, CI/CD, microservices architecture, and orchestration platforms, again emphasising Kubernetes as a strong cloud-native foundational play. Cloud-native applications are designed to be highly scalable, resilient, and agile, allowing organisations to rapidly adapt to changing business needs. By embracing cloud-native development, businesses can accelerate time-to-market, improve scalability, and enhance developer productivity by embedding strong developer experience (DevEx) practices.

Serverless Computing
Serverless computing is a game-changer for businesses seeking to build applications without worrying about server management. With serverless computing, developers can focus solely on writing code while the cloud provider handles infrastructure provisioning and scaling; examples include the Microsoft Azure serverless platform and AWS Lambda (a minimal handler sketch appears at the end of this post). This cloud play offers automatic scaling, cost optimisation, and event-driven architectures, allowing businesses to build highly scalable and cost-effective applications. Serverless computing simplifies development efforts, reduces operational overhead, and enables companies to respond quickly to changing application workloads.

Cloud Security and Compliance
Cloud security and compliance are critical cloud plays that organisations cannot afford to overlook in 2023, particularly with recent data breaches at Optus and Medicare.
Leveraging security as a foundational element of your cloud-native journey is crucial for ensuring the protection, integrity, and compliance of your applications and data. Cloud providers offer robust security frameworks, encryption services, identity and access management solutions, and compliance certifications. By leveraging these cloud security products and practices, businesses can enhance their data protection, safeguard customer information, and ensure regulatory compliance. Strong security and compliance measures build trust, mitigate risks, and protect organisations from potential data breaches.

Data Analytics and Machine Learning
Data analytics and machine learning (ML) are powerful cloud plays that drive data-driven decision-making and unlock actionable insights. Cloud providers offer advanced analytics and ML services that enable businesses to leverage their data effectively. By harnessing cloud-based data analytics and ML capabilities, businesses can gain valuable insights, predict trends, automate processes, and enhance customer experiences. These cloud plays empower organisations to extract value from their data, optimise operations, and drive innovation while providing an enhanced customer experience.

As the evolution of cloud-native, multi-cloud and hybrid cloud strategies accelerates, strategically adopting the drivers above helps enable innovation, agility, and business growth. Importantly, multi-cloud and hybrid cloud strategies provide enhanced security and flexibility, while cloud-native application development empowers rapid application deployment and a better developer experience (DevEx), leveraging DevSecOps and automation practices. These are critical initiatives to consider if you are looking to advance your technology ecosystem and migrate and/or port workloads for optimum flexibility and return on investment (ROI). It is evident that the traditional "lift and shift" strategy does not provide this level of value to the consumer; under that approach the benefits of the above "on-demand cloud plays" may not be realised, with inefficient cloud resource management and unexpected expenses leading to increased OPEX and TCO. By embracing these top cloud plays, businesses investing in innovation can develop and deploy applications that scale seamlessly on the cloud, adapt to changing customer demands, reduce TCO and OPEX, accelerate time-to-market, and maintain high availability and security, while future-proofing themselves in this competitive digital landscape. For more information about cloud, cloud-native, data analytics and more, visit our services page.
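As referenced in the serverless computing section above, the sketch below shows what "focusing solely on the code" can look like in practice: a minimal AWS Lambda-style handler in Python for a hypothetical order-placed event. The event shape, field names and response format are illustrative assumptions under the API Gateway proxy convention, not a prescribed implementation.

```python
# Minimal AWS Lambda-style handler: the cloud provider provisions and scales the
# runtime; the developer supplies only this event-driven function.
import json


def lambda_handler(event, context):
    """Handle a hypothetical 'order placed' event from an event source
    such as API Gateway or an SQS queue."""
    order = json.loads(event.get("body", "{}"))  # event shape is an assumption

    total = sum(item["price"] * item["quantity"] for item in order.get("items", []))

    # In a real system this might write to DynamoDB or publish to SNS;
    # here we simply return the computed total.
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order.get("order_id"), "total": total}),
    }
```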


Cloud-Native, Data & AI

Top 5 Data Engineering Techniques in 2023

Data engineering plays a pivotal role in unlocking the true value of data. From collecting and organising vast amounts of information to building robust data pipelines, it is a complex and vital capability that is becoming more prevalent in today's complex technology world. There are various intricacies to data engineering, from its challenges and techniques to the crucial role it plays in enabling data-driven decision-making. In this blog post, we explore the top 5 trending data engineering techniques that we expect to make a significant impact in 2023.

TL Consulting sees data engineering as an essential discipline that plays a critical role in maximising the value of key data assets. In recent years, several trends and technologies have emerged, shaping the field of data engineering and offering new opportunities for businesses to harness the power of their data. These techniques enable better and more efficient management of data, unlocking valuable insights and helping enable innovation in a more targeted manner. Since data engineering is a rapidly evolving domain, there is a continuous need to introduce new data engineering techniques and technologies to handle the increasing volume, variety, and velocity of data.

Data Engineering Techniques

DataOps
One such trend is DataOps, an approach that focuses on streamlining and automating data engineering processes by applying agile software engineering and DevOps practices. By implementing DataOps principles, organisations can achieve collaboration, agility, and continuous integration and delivery in their data operations. This approach enables faster data processing and analysis by automating data pipelines, version-controlling data artefacts, and ensuring the reproducibility of data processes, in line with DevOps and CI/CD practices. DataOps improves quality, reduces time-to-insights, and enhances collaboration across data teams while promoting a culture of continuous improvement.

Data Mesh
Another significant trend is Data Mesh, which addresses the challenges of scaling data engineering in large enterprises. Data Mesh emphasises domain-oriented ownership of data and treats data as a product. By adopting Data Mesh, organisations can establish cross-functional data teams, where each team is responsible for a specific domain and the associated data products. This approach promotes "self-service" data access through a data platform capability, empowering domain experts to manage and govern their data. Furthermore, as the data mesh gains adoption and evolves, each team shares its data as products, enabling data-driven innovation. Data Mesh enables scalability, agility, and improved data quality by distributing data engineering responsibilities across the organisation.

Data Streaming
Real-time data processing has also gained prominence with the advent of data streaming technologies. Data streaming allows organisations to process and analyse data as it arrives, enabling immediate insights and the ability to respond quickly to dynamic business conditions. Platforms like Apache Kafka, Apache Flink, Azure Stream Analytics and Amazon Kinesis provide scalable and fault-tolerant streaming capabilities. Data engineers leverage these technologies to build real-time data pipelines that facilitate real-time analytics, event-driven applications, and monitoring systems (a minimal consumer sketch follows below). This capability can lead to optimised real-time stream processing and valuable insights into customer behaviours and trends, helping you make timely and informed decisions to drive your business growth.
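To illustrate the streaming pattern, here is a minimal, hedged sketch of a Kafka consumer using the third-party confluent-kafka Python client. The broker address, topic name, message format and the high-value-order rule are hypothetical; a production pipeline would add schema management, error handling and offset-commit strategies.

```python
# Sketch of a real-time consumer for a hypothetical "orders" topic using the
# confluent-kafka client. Each message is processed as soon as it arrives.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # hypothetical broker address
    "group.id": "order-analytics",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])                # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)      # wait up to 1s for the next event
        if msg is None or msg.error():
            continue
        order = json.loads(msg.value())       # assumes JSON-encoded events
        # Immediate, event-driven insight: e.g. flag unusually large orders.
        if order.get("total", 0) > 10_000:
            print(f"High-value order detected: {order.get('order_id')}")
finally:
    consumer.close()
```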
Machine Learning
The intersection of data engineering and machine learning engineering has become increasingly important. Machine learning engineering focuses on the deployment and operationalisation of machine learning models at scale. Data engineers collaborate with data scientists to develop scalable pipelines that automate the training, evaluation, and deployment of machine learning models. Technologies like TensorFlow Extended (TFX), Kubeflow, and MLflow are used to operationalise and manage machine learning workflows effectively (a small experiment-tracking sketch appears at the end of this post).

Data Catalogs
Lastly, from our experience, data catalogues and metadata management solutions have become crucial for managing and discovering data assets. As data volumes grow, organising and governing data effectively becomes challenging. Data cataloguing enables users to search for and discover relevant datasets and helps create a single source of knowledge for understanding business data. Metadata management solutions facilitate data lineage tracking, data quality monitoring, and data governance, ensuring data assets are well managed and trusted. Data cataloguing also accelerates analysis by minimising the time and effort that analysts spend finding and preparing data.

These trends and technology advancements are reshaping the data engineering landscape, providing organisations with opportunities to optimise their data assets, accelerate insights, and make data-driven decisions with confidence. Understanding your data assets and their associated value can lead to smarter, better-informed business decisions. By embracing these trending techniques, organisations can transform their data engineering capabilities to enable some of the following benefits:
Accelerated data-driven decision-making.
Enhanced customer insights, transparency and understanding of customer behaviours.
Improved agility and responsiveness to market trends.
Increased operational efficiency and cost savings.
Mitigated risks through robust data governance and security measures.

Data engineering is vital for optimising organisational data assets, since these are an important cornerstone of any business. It ensures data quality, integration, and accessibility, enabling effective data analysis and decision-making. By transforming raw data into valuable insights, data engineering empowers organisations to maximise the value of their data assets and gain a competitive edge in the digital landscape. TL Consulting specialises in data engineering techniques and solutions that drive transformative value for businesses, enabling the benefits above. We leverage our expertise to design and implement robust data pipelines, optimise data storage and processing, and enable advanced analytics. Partner with us to unlock the full potential of your data and make data-driven decisions with confidence. Visit TL Consulting's data services page to learn more about our service capabilities and send us an enquiry if you'd like to learn more about how our dedicated consultants can help you.
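As referenced in the machine learning section above, here is a minimal, hedged sketch of experiment tracking with MLflow and scikit-learn. The synthetic data, parameter values and metric are illustrative; a production workflow would typically run this step inside an automated pipeline (for example, orchestrated by Kubeflow or TFX).

```python
# Sketch of operationalising a model-training step with MLflow tracking.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a curated training dataset produced by the data pipeline.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics and the trained model so the run is reproducible
    # and the model can later be promoted through a registry for deployment.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```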


Data & AI

Aligning the Correct Data Analytics Model to your Business

Aligning the correct data analytics model to your business needs can lead to a significant return on investment, increased business growth, and better alignment with your business strategy. In addition to financial returns, analytics and AI can be used to fine-tune business processes and day-to-day operations. In order to leverage the power of data analytics correctly, it's important for organisations to standardise the way they identify the business questions that need to be answered.

Today, many organisations are moving at a rapid pace, which sometimes requires timely business decisions to be made. These decisions are sometimes based on the intuition and experience of the business decision-makers and on their current understanding of the business landscape. For data analytics to play a successful role in shaping these decisions, the data presented to the business should add weight and enrichment so that the decisions are backed by facts. For this to occur, data analysts need to work cohesively with the business to ensure there is strong alignment with the business strategy and that the right questions are being asked. Taking a holistic approach helps the organisation establish the right process to identify the underlying business problems and then take the appropriate actions, using a data-driven decision-making approach.

4 Major Questions to Ask Your Business
A good data analytics model should be aligned to answering a set of business questions that fulfil business requirements. In addition, it's important for data analysts and data scientists to understand what metrics and KPIs the business needs to measure.
What happened? (Reports)
Why did it happen? (Diagnosis)
What will happen in the future? (Predictions)
What is the best way forward? (Recommendations)

What is Data Analytics?
Data analytics is the process of utilising quantitative methods to derive actionable insights from data in order to make informed decisions. There are four primary methods of data analysis: descriptive, diagnostic, predictive, and prescriptive. A short descriptive-analytics sketch in Python appears at the end of this post.

Analytics Models Deployed in Various Industries

Type of Analytics: Descriptive | Industry: Education
Many LMS platforms and learning systems offer descriptive analytical reporting capabilities with the aim of helping businesses and institutions measure learner performance, to ensure that training goals and targets are met. Descriptive analytics is used to track course enrolments and course compliance rates, record which learning resources are accessed, collate course survey results, and identify the length of time learners took to complete a course, among other activities.

Type of Analytics: Diagnostic | Industry: Retail
A retail store that sells eco-friendly products noticed a recent surge in revenue from one state. During discovery, the company learned that the surge was driven by a leap in sales of a single product. Research revealed the causal relationship: the state's governor had signed a law making plastic shopping bags illegal, causing sales of reusable bags to soar.

Type of Analytics: Predictive | Industry: E-commerce
E-commerce websites predict customer preferences and recommend products to customers based on past purchases and search history, using state-of-the-art artificial intelligence algorithms.

Type of Analytics: Prescriptive | Industry: Insurance
Insurance companies want to serve clients who want fast and reliable customer service online.
Based on pricing and premium information for clients, they prescribe the right pricing and premiums using AI models. There are also considerations regarding privacy-enhancing technologies (PETs), which allow the AI models to train on homomorphically encrypted data, taking data privacy into account.

Businesses can readily adopt a data analytics model to enhance the way they do business. An example of a data analytics lean canvas model encompassing an end-to-end solution is shared below:

Conclusion
In conclusion, for organisations to extract meaningful insights from their data and make the right decisions about their business, the correct data analytics model eliminates guesswork and manual tasks, whether that is choosing the right content or developing the right products for your customers' needs. TL Consulting provides advisory and transformation services in the data analytics and engineering domain and can help your business design and implement the correct-fit data analytics model aligned to your business needs and transformation goals. Read through our data engineering and data platforms page to learn more about our service capabilities and send us an enquiry if you'd like to learn more about how our dedicated consultants can help you.
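As referenced above, here is a minimal, hedged sketch of the descriptive method using pandas, in the spirit of the education example: simple aggregate reporting over a hypothetical table of course enrolments. The file name, columns and metrics are illustrative assumptions only.

```python
# Descriptive analytics sketch: "what happened?" reporting over course enrolments.
import pandas as pd

# Hypothetical LMS export: one row per learner per course.
# Assumed columns: course, completed (bool), days_to_complete, survey_score
enrolments = pd.read_csv("course_enrolments.csv")

report = enrolments.groupby("course").agg(
    enrolments=("course", "size"),
    completion_rate=("completed", "mean"),
    avg_days_to_complete=("days_to_complete", "mean"),
    avg_survey_score=("survey_score", "mean"),
)

print(report.sort_values("completion_rate", ascending=False))
```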


Data & AI