The Top 10 Artificial Intelligence Trends Everyone Should Be Watching In 2020

Artificial Intelligence (AI) has undoubtedly been the technology story of the 2010s, and it doesn’t look like the excitement is going to wear off as a new decade dawns.

The past decade will be remembered as the time when machines that can truly be thought of as “intelligent” – as in capable of thinking, and learning, like we do – started to become a reality outside of science fiction.

While no prediction engine has yet been built that can plot the course of AI over the coming decade, we can be fairly certain about what might happen over the next year. Spending on research, development, and deployment continues to rise, and debate over the wider social implications rages on. Meanwhile, the incentives only get bigger for those looking to roll out AI-driven innovation into new areas of industry, fields of science, and our day-to-day lives.

Here are my predictions for what we’re likely to see continue or emerge in the first year of the 2020s.

1. AI will increasingly be monitoring and refining business processes

While the first robots in the workplace were mainly involved with automating manual tasks such as manufacturing and production lines, today’s software-based robots will take on the repetitive but necessary work that we carry out on computers. Filling in forms, generating reports and diagrams, and producing documentation and instructions are all tasks that can be automated by machines that watch what we do and learn to do it for us in a quicker and more streamlined manner. This automation – known as robotic process automation (RPA) – will free us from the drudgery of time-consuming but essential administrative work, leaving us to spend more time on complex, strategic, creative and interpersonal tasks.
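The simplest flavor of this kind of automation is template-driven document generation. The sketch below is purely illustrative (the template and field names are invented, not taken from any RPA product) but shows the shape of the work a software robot absorbs: populating a recurring report from structured data instead of retyping it.

```python
# Minimal illustration of repetitive document work an RPA bot can absorb:
# populating a recurring report template from structured records.

REPORT_TEMPLATE = (
    "Weekly report for {region}\n"
    "Units sold: {units}\n"
    "Revenue: ${revenue:,.2f}\n"
)

def fill_report(record):
    """Generate one report from one data record -- no manual retyping."""
    return REPORT_TEMPLATE.format(**record)

def fill_all(records):
    """The software 'robot' loops over every record a person used to handle."""
    return [fill_report(r) for r in records]
```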

2. More and more personalization will take place in real-time

This trend is driven by the success of internet giants like Amazon, Alibaba, and Google, and their ability to deliver personalized experiences and recommendations. AI allows providers of goods and services to quickly and accurately build a 360-degree view of customers in real time as they interact through online portals and mobile apps, learning to fit their predictions to our wants and needs with ever-increasing accuracy. Just as pizza delivery companies like Domino’s will learn when we are most likely to want pizza, and make sure the “Order Now” button is in front of us at the right time, every other industry will roll out solutions aimed at offering personalized customer experiences at scale.

3. AI becomes increasingly useful as data becomes more accurate and available

The quality of information available is often a barrier to businesses and organizations wanting to move towards AI-driven automated decision-making. But as technology and methods of simulating real-world processes and mechanisms in the digital domain have improved over recent years, accurate data has become increasingly available. Simulations have advanced to the stage where car manufacturers and others working on the development of autonomous vehicles can gain thousands of hours of driving data without vehicles even leaving the lab, leading to huge reductions in cost as well as increases in the quality of data that can be gathered. Why risk the expense and danger of testing AI systems in the real world when computers are now powerful enough, and trained on accurate-enough data, to simulate it all in the digital world? 2020 will see an increase in the accuracy and availability of real-world simulations, which in turn will lead to more powerful and accurate AI.

4. More devices will run AI-powered technology

As the hardware and expertise needed to deploy AI become cheaper and more available, we will start to see it used in an increasing number of tools, gadgets, and devices. In 2019 we’re already used to running apps that give us AI-powered predictions on our computers, phones, and watches. As the next decade approaches and the cost of hardware and software continues to fall, AI tools will increasingly be embedded into our vehicles, household appliances, and workplace tools. Augmented by technology such as virtual and augmented reality displays, and paradigms like the cloud and Internet of Things, the next year will see more and more devices of every shape and size starting to think and learn for themselves. 

5. Human and AI cooperation increases

More and more of us will get used to the idea of working alongside AI-powered tools and bots in our day-to-day working lives. Increasingly, tools will be built that let us make the most of our distinctly human skills – the ones AI can’t quite manage yet, such as imagination, design, strategy, and communication – while augmenting them with super-fast analytics fed by vast datasets that are updated in real time.

For many of us, this will mean learning new skills, or at least new ways to use our skills alongside these new robotic and software-based tools. The IDC predicts that by 2025, 75% of organizations will be investing in employee retraining in order to fill skill gaps caused by the need to adopt AI. This trend will become increasingly apparent throughout 2020, to the point where if your employer isn’t investing in AI tools and training, it might be worth considering how well placed they are to grow over the coming years.  

6. AI increasingly at the “edge”

Much of the AI we’re used to interacting with now in our day-to-day lives takes place “in the cloud” – when we search on Google or flick through recommendations on Netflix, the complex, data-driven algorithms run on high-powered processors inside remote data centers, with the devices in our hands or on our desktops simply acting as conduits for information to pass through.

Increasingly, however, as these algorithms become more efficient and capable of running on low-power devices, AI is taking place at the “edge,” close to the point where data is gathered and used. This paradigm will continue to become more popular in 2020 and beyond, making AI-powered insights a reality outside of the times and places where super-fast fiber optic and mobile networks are available. Custom processors designed to carry out real-time analytics on-the-fly will increasingly become part of the technology we interact with day-to-day, and increasingly we will be able to do this even if we have patchy or non-existent internet connections.

7. AI increasingly used to create films, music, and games

Some things, even in 2020, are probably still best left to humans. Anyone who has seen the current state-of-the-art in AI-generated music, poetry or storytelling is likely to agree that the most sophisticated machines still have some way to go until their output will be as enjoyable to us as the best that humans can produce. However, the influence of AI on entertainment media is likely to increase. This year we saw Robert De Niro de-aged in front of our eyes with the assistance of AI, in Martin Scorsese’s epic The Irishman, and the use of AI in creating brand new visual effects and trickery is likely to become increasingly common.

In videogames, AI will continue to be used to create challenging, human-like opponents for players to compete against, as well as to dynamically adjust gameplay and difficulty so that games can continue to offer a compelling challenge for gamers of all skill levels. And while completely AI-generated music may not be everyone’s cup of tea, where AI does excel is in creating dynamic soundscapes – think of smart playlists on services like Spotify or Google Music that match tunes and tempo to the mood and pace of our everyday lives.

8. AI will become ever more present in cybersecurity

As hacking, phishing and social engineering attacks become ever-more sophisticated, and themselves powered by AI and advanced prediction algorithms, smart technology will play an increasingly important role in protecting us from these attempted intrusions into our lives. AI can be used to spot giveaway signs that digital activity or transactions follow patterns that are likely to be indicators of nefarious activity, and raise alarms before defenses can be breached and sensitive data compromised.
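The “spot the giveaway signs and raise alarms” idea can be illustrated with a deliberately tiny baseline check. A real system would use trained anomaly-detection models over many signals; the event type, threshold, and function names below are invented for illustration only.

```python
def flag_anomaly(history, today, factor=3.0):
    """Alert when today's count of a monitored event (say, failed logins
    on an account) far exceeds its historical daily average."""
    baseline = sum(history) / len(history)
    # max(..., 1.0) stops a near-zero baseline from triggering on noise
    return today > factor * max(baseline, 1.0)
```

An AI-driven defense does essentially this across thousands of learned patterns at once, before the intrusion succeeds rather than after.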

The rollout of 5G and other super-fast wireless communications technology will bring huge opportunities for businesses to provide services in new and innovative ways, but they will also potentially open us up to more sophisticated cyber-attacks. Spending on cybersecurity will continue to increase, and those with relevant skills will be highly sought-after.

9. More of us will interact with AI, maybe without even knowing it

Let’s face it, despite the huge investment in recent years in natural-language powered chatbots in customer service, most of us can recognize whether we’re dealing with a robot or a human. However, as the datasets used to train natural language processing algorithms continue to grow, the line between humans and machines will become harder and harder to distinguish. With the advent of deep learning and semi-supervised models of machine learning such as reinforcement learning, the algorithms that attempt to match our speech patterns and infer meaning from our own human language will become more and more able to fool us into thinking there is a human on the other end of the conversation. And while many of us may think we would rather deal with a human when looking for information or assistance, if robots fulfill their promise of becoming more efficient and accurate at interpreting our questions, that could change. Given the ongoing investment and maturation of the technology powering customer service bots and portals, 2020 could be the first time many of us interact with a robot without even realizing it.

10. But AI will recognize us, even if we don’t recognize it

Perhaps even more unsettlingly, the rollout of facial recognition technology is only likely to intensify as we move into the next decade. Not just in China (where the government is looking at ways of making facial recognition compulsory for accessing services like communication networks and public transport) but around the world. Corporations and governments are increasingly investing in these methods of telling who we are and interpreting our activity and behavior. There’s some pushback against this – this year, San Francisco became the first major city to ban the use of facial recognition technology by the police and municipal agencies, and others are likely to follow in 2020. But the question of whether people will ultimately begin to accept this intrusion into their lives, in return for the increased security and convenience it will bring, is likely to be a hotly debated topic of the next 12 months.

This article was written by Bernard Marr.

Operationalizing AI and ML with Marketing

When AI practitioners talk about taking their machine learning models and deploying them into real-world environments, they don’t call it deployment. Instead, the term that’s used is “operationalizing”. This might be confusing for traditional IT operations managers and application developers. Why don’t we “deploy” or “put into production” AI models? What does AI operationalization mean, and how is it different from typical application development and IT systems deployment?

One of the unique things about an AI project, versus a traditional application development project, is that it doesn’t follow the same build / test / deploy / manage order of operations. Instead, there are two distinct phases of operation: a “training” phase and an “inference” phase. The training phase involves selecting one or more machine learning algorithms; identifying and preparing appropriate, clean, well-labeled data; applying that data to the algorithm, along with hyperparameter configurations, to create an ML model; and then validating and testing the model to make sure it can generalize properly, without too much overfitting of training data or underfitting for generalization. All of those steps comprise just the training phase of an AI project.
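Those training-phase steps can be sketched end-to-end in miniature. This is a toy, not a real pipeline: the “model” is a trivial nearest-class-mean classifier on one-dimensional data, chosen only so the split / fit / validate sequence described above is visible in a few lines.

```python
import random

def train_test_split(data, labels, test_frac=0.25, seed=0):
    """Shuffle and split examples into training and validation sets."""
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train = [(data[i], labels[i]) for i in idx[:cut]]
    valid = [(data[i], labels[i]) for i in idx[cut:]]
    return train, valid

def fit_nearest_mean(train_pairs):
    """'Training': learn the mean feature value for each class."""
    sums, counts = {}, {}
    for x, y in train_pairs:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Classify x as the class whose learned mean it sits closest to."""
    return min(model, key=lambda y: abs(model[y] - x))

def accuracy(model, pairs):
    return sum(predict(model, x) == y for x, y in pairs) / len(pairs)

# Toy 1-D dataset: class 0 clusters near 1.0, class 1 near 5.0.
data = [1.1, 0.9, 1.3, 0.8, 5.2, 4.8, 5.1, 4.9, 1.0, 5.0, 1.2, 5.3]
labels = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1]

train, valid = train_test_split(data, labels)
model = fit_nearest_mean(train)
# Comparing training vs. validation accuracy is the basic check for
# overfitting (train high, valid low) or underfitting (both low).
train_acc, valid_acc = accuracy(model, train), accuracy(model, valid)
```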

On the other hand, the inference phase of an AI project focuses on the application of the ML model to the particular use case, ongoing evaluation to determine if the system is generalizing properly to real-world data, and adjustments to the model, development of new training set data, and hyperparameter configurations to iteratively improve the model. The inference phase can also be used to determine if there are additional use cases for the ML model that are broader than originally specified with the training data. In essence, the training phase happens in the organizational “laboratory” and the inference phase happens in the “real world”.

But the real world is where things get messy and complicated. First of all, there’s no such thing as a single platform for machine learning. The universal machine learning / AI platform doesn’t exist because there are so many diverse places in which we can use an ML model to make inferences, do classification, predict values, and all the other problems we are looking for ML systems to solve. We could be using an ML model in an Internet of Things (IoT) device deployed at the edge, or in a mobile application that can operate disconnected from the internet, or in a cloud-based always-on setting, or in a large enterprise server system with private, highly regulated, or classified content, or in desktop applications, or in autonomous vehicles, or in distributed applications, or… you get the picture. Any place where the power of cognitive technology is needed is a place where these AI systems can be used.

This is both empowering and challenging. The data scientist developing the ML model might not have any expectations for how and where the ML model will be used, and so instead of “deploying” this model to a specific system, it needs to be “operationalized” in as many different systems, interfaces, and deployments as necessary. The very same model could be deployed in an IoT driver update as well as a cloud service API call. As far as the data scientists and data engineers are concerned, this is not a problem at all. The details of deployment depend on the platforms on which the ML model will be used. But the requirements for the real-world usage and operation (hence the word “operationalization”) of the model are the same regardless of the specific application or deployment.

Requirements for AI Operationalization

Many of the early cognitive technology projects were indeed laboratory-style “experiments” that aimed to identify areas where AI could potentially help, but were never put into production. Many of these efforts were small-scale experiments run by data science organizations. However, to provide real value for the organization, these experiments need to move out of the laboratory and be real, reliable production models. This means that the tools used by data scientists for laboratory-style experiments are not really appropriate for real-world operations.

First and foremost, real-world, production, inference-phase AI projects need to be owned and managed by the line of business or IT operations that are responsible for the problem the cognitive technology is solving. Is the AI model trained for fraud analysis? Is it classifying images for content moderation? Is the technology used for security-application facial detection? Is it creating content for social media posts? If so, then the data science organization responsible for crafting the model needs to hand it off to the organization responsible for those activities. This means there needs to be a place where the business or IT organization can monitor, manage, govern, and analyze the results of the ML models to make sure they’re meeting its needs.

The biggest thing these organizations need to realize is that cognitive technologies are probabilistic by their very nature: an ML model will never produce a 100%-certain result. How will these organizations deal with these almost-but-not-quite-certain results? What is the threshold at which they will accept answers, and what is the fallback for less-than-certain results? These are considerations in the operationalization of ML models in the inference phase that are not relevant during the training phase.
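In code, the threshold-plus-fallback decision often reduces to a small routing function like the sketch below (the threshold value and the human-review fallback are illustrative assumptions, not a prescription):

```python
def route_prediction(label, confidence, accept_at=0.90):
    """Accept the model's answer only above a chosen confidence
    threshold; anything less certain falls back to a human reviewer."""
    if confidence >= accept_at:
        return ("auto", label)
    return ("human_review", label)
```

Where to set `accept_at`, and what the fallback path costs, are exactly the business decisions the operationalizing organization has to own.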

An additional consideration for inference-phase operationalization of ML models is that they need to operate on data sets that are not going to necessarily be as clean as the training sets. Good training sets are clean, de-duped, and well-labeled. Perhaps you’ve trained your system to recognize characters on checks for an image-based deposit system. But in the real world, those images could be of very poor quality with bad lighting, poor resolution images, with shadows, improperly aligned images, and things in the way. How will the operational ML model deal with this bad data? What additional logic needs to be wrapped around the AI system to handle these bad inputs to avoid bad predictive outputs?
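That wrapper logic is often just explicit input validation in front of the model call. The sketch below uses the check-deposit example; the quality metrics, field names, and thresholds are invented stand-ins for a real image-quality pipeline.

```python
def validate_check_image(image):
    """Reject inputs the model was never trained to handle. `image` is a
    dict of precomputed quality metrics -- a stand-in for a real
    image-quality analysis step."""
    problems = []
    if image["width"] < 640 or image["height"] < 480:
        problems.append("resolution too low")
    if not 40 <= image["mean_brightness"] <= 220:
        problems.append("too dark or too bright")
    if image["skew_degrees"] > 10:
        problems.append("image not properly aligned")
    return problems

def deposit_check(image, ocr_model):
    """Guard logic wrapped around the model call: bad inputs never reach
    the model, so they can't turn into bad predictions."""
    problems = validate_check_image(image)
    if problems:
        return {"status": "retake_photo", "reasons": problems}
    return {"status": "ok", "amount": ocr_model(image)}
```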

Another operationalization requirement to consider during the inference phase is the compute power necessary to run the model with satisfactory response time. Training compute requirements are not the same as inference-phase requirements. Deep learning based supervised learning may require an intense amount of compute power and heaps of data and images to create a satisfactory model, but once the model is created the operational compute requirements might be significantly lower. On the flip side, you might be using a simple K-Nearest Neighbors (KNN) algorithm that implements a “lazy” form of learning, requiring little compute at training time but potentially lots of computing horsepower at inference time. Clearly understanding, planning, and providing the right compute power for the inference phase is a critical operationalization requirement.
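The KNN point is easy to see in code: “training” is just storing the data, while every prediction scans all of it. This toy one-dimensional version is only meant to make that compute asymmetry concrete.

```python
def knn_fit(points, labels):
    """'Lazy' training: just store the data -- nearly free in compute."""
    return list(zip(points, labels))

def knn_predict(model, x, k=3):
    """All the work happens at inference time: every stored point is
    examined for every single prediction."""
    nearest = sorted(model, key=lambda pair: abs(pair[0] - x))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)  # majority vote of neighbors
```

A deep network is the mirror image: enormous training cost, but a single fixed-size forward pass at inference.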

Challenges of AI Operationalization

The challenge of meeting these operationalization requirements is that the tools, data, and practices of the data science organization are not the same tools, data, and practices of the operationalization organization. These organizations often work with their own proprietary tools – data notebooks and data science-oriented tools on the data science side during the training phase, and runtime environments, big data infrastructure, and IDE-based development ecosystems on the operations side. There’s no easy way to bridge these toolchains, so the transition from the training phase in the laboratory to the inference phase in the real world often becomes a struggle.

Another challenge is that of interpretability / explainability. Many ML algorithms, especially neural networks, can provide high accuracy at the expense of low explainability. That is to say, the model will provide results but with no clear explanation as to why or how it arrived at those results. While this might be acceptable in the training phase, the line of business might not be willing to accept the determinations of a “black box” that is denying loan applications, classifying potentially fraudulent activities, or transcribing speech into text without explainability. In this light, operationalization of ML models might also require a tandem process by which results can be explained, or an ensemble model somewhere that provides a path to interpretability.
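What an interpretable path can look like is easiest to show with a white-box model. For a linear scorer, each feature’s weighted contribution is itself the explanation; one common tandem approach pairs the black box with an interpretable surrogate of this kind. The weights and feature names below are invented for illustration.

```python
def explain_linear_score(weights, bias, features):
    """For an interpretable (white-box) linear model, each feature's
    contribution w_i * x_i is itself the explanation of the score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    # Rank features by the magnitude of their influence on this decision.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked
```

A denied loan can then be justified concretely: “the score was driven mostly by missed payments,” rather than “the network said no.”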

Yet another challenge has to do with data ownership. Many data science personnel feel content working with local data sets on their personal machines, but this doesn’t work well in an operational context. The result is that these organizations operate in silos, leading to significant inefficiencies and duplicated data. While some seek the perfect platform to bridge these silos, the truth is that no such perfect platform exists. Instead, the answer is optimized processes and procedures that focus not just on AI model development and training or on model operationalization, but also on the critical transition between these two phases. Understanding all this complexity and planning for it is part of what makes for successful AI, ML, and cognitive technology projects.


Nine AI Marketing Trends Set To Explode In 2020

Using artificial intelligence in marketing has made the lives of agency professionals easier in many ways. Chatbots that can come up with their own appropriate responses, and voice-based search for consumers, help brands manage the volume of users interacting with them efficiently.

The coming year is likely to see several innovations in AI. Additionally, we are likely to note several improvements in the AI systems that we already use. AI marketing trends are expected to make quite an impact on how agencies do business in 2020.

To help us understand what we can expect from the expanding field of AI in marketing, nine experts from Forbes Agency Council offer their insight into what potential AI trends will explode on the marketing scene in 2020.

1. Buyer Profiling

AI now powers customer segmentation, push notifications, click tracking, retargeting and content creation. Marketers are using AI for product and campaign recommendations. They will also be able to use the intelligence for profiling buyers based on behaviors and actions. Additionally, in 2020 marketers will continue to use AI to improve customer service. – Alex Membrillo, Cardinal Digital Marketing

2. Insights-Driven Retention And Loyalty

A clear and powerful opportunity for marketers to create value for consumers and stakeholders is to leverage AI for customer retention and loyalty. This holds particularly true for retailers and for manufacturers with direct-to-consumer platforms and strong customer relationship management programs, where combining data from multiple sources can produce meaningful insights into how to drive customers back into the funnel. – Luigi Matrone, eBusiness Institute

3. Real-Time Customer Interaction Across Every Channel

One of the exciting possibilities AI holds for marketing and retail marketing in particular is its ability to manage real-time customer interactions across every channel. Many times, the difference between success and failure hinges on how well brands can modify their tactics in response to customer feedback. For brands willing to listen, AI promises a treasure trove of meaningful customer data. – Mary Ann O’Brien, OBI Creative

4. Real-Time Marketing

Many interesting trends are coming our way, but it looks like real-time marketing is going to be a thing in 2020. Gathering and analyzing customer data already helps marketers understand exactly how customers behave. I think AI will be used more and more as a means to enhance the efforts and deliver a perfectly tailored message to the right customer at the right time. – Solomon Thimothy, OneIMS

5. Voice-First Search

With an ever-growing suite of voice-powered AI like Siri and Cortana, it’s no surprise that we are on the fast track toward voice-first search queries. With this in mind, marketers in 2020 will need to capitalize on this trend by tailoring their content and SEO strategies around how consumers speak — think conversational phrases like “How do you make pizza at home?” instead of “pizza recipe.” – Adam Binder, Creative Click Media

6. Automation And Ad Targeting Tools

Marketing automation emerged as a business buzzword in 2019. Now, with more startups bubbling up in the niche, 2020 will be no different. Combined with advanced ad targeting tools that will effectively locate your target audience and significantly reduce advertising spend, AI will continue to reign over the marketing and business world. – Ashar Jamil, Digitally Up

7. A Personalized Approach

When it comes to customer engagement, timing is everything. According to our recent research, 65% of consumers expect that within five years marketing emails will be fully tailored to them at a given moment. This tells us that send time optimization (STO), using historical data to send digital messages to individuals when they are most likely to engage, will have an increased impact in 2020. – Camille Nicita, Gongos, Inc.
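The idea behind send time optimization can be sketched in a few lines. This is only the simplest possible version (a production STO system models far more than the hour of day), and the function name is illustrative:

```python
from collections import Counter
from datetime import datetime

def best_send_hour(open_times):
    """Send-time optimization in miniature: choose the hour of day at
    which this recipient has historically opened the most emails."""
    by_hour = Counter(t.hour for t in open_times)
    return by_hour.most_common(1)[0][0]
```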

8. Developing Content Strategies

Even creative writers often find it difficult and time-consuming to develop a content strategy, especially one backed by data. AI-powered tools can develop strategies backed by real results much quicker than an individual can. With so much content out there and only so much time to create more, this is pivotal to marketing. – Charles Mazzini, Hyperlinks Media, LLC

9. Brand Association And Referral

In 2020, brand association and referral through AI is going to be something to keep an eye on. As chatbots and voice assistants get more sophisticated, the anticipation of needs combined with understood behavior will provide more opportunities for digital marketers and consumers. It’s going to be less about “if you like this” and more about “have you thought about this.” – Bo Bothe, BrandExtract, LLC


AI in marketing: How to find the right use cases, people and technology

Most marketers already know they can capitalize on artificial intelligence (AI) to make more informed decisions, better engage their target audiences, and drive revenue for their organizations.

Yet, according to a Demandbase survey released in 2019, only 18% of B2B marketers and sales professionals are currently using the tech.

The same study also found that 67% of marketers expect higher lead quality from AI, and 56% believe the technology can help yield better engagement with customers and prospects.

So, what’s holding marketers back from using it?

While marketers recognize the value that the tech can deliver, they often lack the perfect combination of prioritized sweet-spot use cases, people/organizational capacity, and technology to effectively execute an AI strategy.

Unfortunately, by not mastering this trio, marketers are putting themselves—and their companies—at risk of becoming obsolete.

Experts from McKinsey & Company predict that AI technologies could lead to a substantial performance gap between front-runners (who fully absorb artificial intelligence tools across their enterprises) and non-adopters or partial adopters by 2030.

AI front-runners are projected to potentially double cash flow by 2030, with implied net cash-flow growth of roughly 6% per year through 2030, while non-adopters “might experience around a 20% decline in cash flow from today’s levels.”

To avoid falling behind and to begin reaping the benefits, every marketer must prioritize identifying the best-fit use cases, hiring and/or developing the right people, and implementing the right technology in the year ahead.

The AI landscape is littered with failed projects, so here’s what to keep in mind to increase your likelihood of success:

Identifying the best-fit AI use cases

While there may be hundreds of AI use cases that a marketer will eventually want to execute on, marketers should first map out their top candidates according to two dimensions: value and feasibility.

It’s okay to first think big, but then you need to narrow the list.

Among the common use cases are the following: intelligent chatbots, smarter personalized digital advertising, content generation and curation, AI-powered account or lead scoring, AI-assisted email responses, multi-channel marketing attribution, next best action, customer lifetime value, and sentiment analysis.

Marketers should estimate the value delivered for each use case (potential upside revenue, time-to-market, reduced manual labor, customer satisfaction), as well as time and effort it will take to see actionable results.

If the use case isn’t both highly valuable and highly feasible – and if you don’t know how you’ll act on the predictive results – then it should be taken off the short-term wish list.
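The narrowing step above amounts to a simple filter-then-rank over team scores. The sketch below assumes a 1-5 scoring scale and invented use-case names; it is a way to structure the conversation, not a formula for it.

```python
def shortlist_use_cases(use_cases, min_score=4):
    """Keep only use cases rated highly on BOTH dimensions (1-5 team
    scores for value and feasibility), then rank the survivors."""
    keep = [u for u in use_cases
            if u["value"] >= min_score and u["feasibility"] >= min_score]
    # Rank by the product of the two scores, highest first.
    return sorted(keep, key=lambda u: -(u["value"] * u["feasibility"]))
```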

Marketers who are unsure of where to start should consider assessing the value of these common high-impact applications:

  • Optimizing advertising spend: Marketers spend billions of dollars a year on advertising, but often have no way of quantifying whether these investments are worthwhile. With AI, marketers can more accurately attribute sales to specific advertising initiatives, enabling them to optimize their spend to bring in more leads with fewer resources.
  • Enhancing customer experiences: AI can empower marketers to hone in on their customers’ preferences and create personalized experiences based on past buying and browsing behavior. Not only does this enhance the customers’ perception of the brand, but it can also lead to increased sales—especially when they are recommended a product they hadn’t previously considered.
  • Predicting and mitigating customer churn: Customer retention teams often have limited resources and aren’t able to dedicate the same level of attention to every customer. To solve for this, marketers can implement an AI solution that discovers patterns in historical customer activity to accurately predict which customers are likely to leave them for a competitor. Using this information, the team can better focus retention efforts on the customers that are most at risk and offer them incentives to remain loyal.
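The churn use case above follows a common score-then-prioritize pattern. In the sketch below, a hand-written heuristic stands in for a trained churn model (the signals, weights, and field names are all invented for illustration); the point is the second step, spending a limited retention budget on the highest-risk customers.

```python
def churn_risk(customer):
    """Toy heuristic stand-in for a trained churn model: risk rises with
    inactivity and support complaints, and falls with tenure."""
    score = 0.0
    score += min(customer["days_since_last_order"] / 90.0, 1.0) * 0.5
    score += min(customer["support_tickets_90d"] / 5.0, 1.0) * 0.3
    score += (1.0 - min(customer["tenure_years"] / 5.0, 1.0)) * 0.2
    return score

def retention_targets(customers, budget=2):
    """Focus the limited retention budget on the riskiest customers."""
    ranked = sorted(customers, key=churn_risk, reverse=True)
    return [c["id"] for c in ranked[:budget]]
```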

Once marketing teams have identified the processes they want to apply AI to, they can start to identify the individuals who will lead the implementations and the technologies they need to bring those use cases to life.

Hiring or developing the right people

The skillsets of the modern-day marketer are fast-evolving.

With the number of digital customer touchpoints that marketers need to manage—which includes everything from desktops and mobile devices, to social media and beyond—marketers need to consume, analyze, and leverage endless amounts of data to inform decisions.

That data is especially crucial for fueling valuable AI applications; without it, the systems won’t have the necessary information they need to generate mission-critical insights—such as predicting consumer behavior or creating truly personalized content.

It’s no surprise then that Marketing Land’s January 2019 Digital Agency Survey found 72% of agency marketers said data science and analysis will be the most in-demand technical skills in the coming years, followed by conversion rate optimization (59%), and computer science/AI and technical SEO (52% each).

Unfortunately, those skills are hard to come by; according to Indeed, the number of individuals searching for AI-related jobs decreased by 14.5% from May 2018 to May 2019. They also found that demand for data scientists increased by 344% from 2013 to 2019, yet the talent pool grew by just 14% in 2018.

Although the talent shortage certainly presents challenges for marketers, there are ways around it. Marketers can identify internal “citizen data scientists.”

These are individuals who possess deep domain knowledge and have a strong analytics background, but not formal data science training.

With the right tools and training, citizen data scientists can get up to speed on the organization’s AI strategy quickly.

Additionally, marketers should consider hiring an AI consultant to support their initiatives or looking to their platform provider for guidance on AI strategies in the near-term while they work on adding AI to their marketing DNA and building it as a competency over the longer-term.

Implementing the right AI technology

Regardless of the use case, there are different approaches marketers can take to leverage AI in marketing processes.

Marketers know well that there are some 7,000+ different vendor tools that could be leveraged in a martech stack, and an exponentially increasing number of those incorporate some AI, or at least claim to do so.

The most common approach taken by marketers today is to leverage AI that comes built into a martech tool, optimized for just that one point solution or capability.

That means marketers might end up with ten different AI tools for ten different capabilities – but it remains the most frequent approach today because it delivers fast time-to-market without having to hire or develop the AI competency in-house on day one.

While having those point solutions may work today for certain problems, the reality is that some of the highest value problems in marketing or customer loyalty can’t be solved by a point tool.

Use cases such as next best offer, cross-sell/up-sell, churn prediction and reduction, customer experience optimization, price elasticity modeling, customer satisfaction, and others require a broader enterprise solution.

To that end, finding the right AI technology or platform backed by some business transformation help is absolutely critical to marketers’ AI success.

Here are three considerations for success when selecting AI technologies:

  • Automated creation of machine learning models, without requiring coding or data science tools. Not only does this enable non-data scientists to deploy their own models, but it also frees up the experts from the repetitive tasks model building creates, allowing them to use their unique expertise for selecting and fine-tuning models to meet marketing needs. Those steps include preparing the data, modifying it to improve the models, diversifying the algorithms, and more.
  • Monitoring of how models are performing. This is crucial to ensuring the success of the algorithms, as a monitoring component can identify and solve for performance issues, infrastructure challenges, and changes in data. Without the ability to monitor and manage deployments, it’s likely that the AI models will eventually fail.
  • Trusted, explainable AI. Marketers should only invest in an AI tool if it’s human-friendly and the AI can be explained—in other words, a “white box” solution. Otherwise, they won’t have any insight into the decisions their algorithm is making and why those decisions are being made. As a result, the algorithm might be inadvertently biased, which could lead to compromised brand reputation and a loss of consumer trust—both of which were top AI bias concerns for the more than 350 U.S. and U.K. executives polled in one recent survey.
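The monitoring point above is worth making concrete. One of the simplest checks a monitoring component can run is input drift detection: has a model input’s live distribution wandered away from what the model saw in training? The z-score threshold and single-feature framing below are illustrative simplifications of what production monitoring tools do across many features at once.

```python
def input_drift_alert(train_mean, train_std, live_values, z_threshold=3.0):
    """Crude drift monitor: alert when the live mean of a model input
    drifts far from its mean in the training data."""
    n = len(live_values)
    live_mean = sum(live_values) / n
    se = train_std / (n ** 0.5)          # standard error of the mean
    z = abs(live_mean - train_mean) / se
    return z > z_threshold
```

Without a check like this running continuously, a model trained on last year’s data can fail silently as the world changes underneath it.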

The impact of AI is being felt across all industries, and the savviest marketers are prioritizing getting their AI strategies in motion to maintain their organizations’ competitive advantage.

But in the AI-driven era, it’s not enough for marketers to be interested in AI; to be truly successful, they’ll need to think critically about the processes, people, and technology that will be core to their AI missions.

Those that master that combination will be easy to identify, as their organizations will dominate for years to come.
