Get Personalization Wrong! It's Personal!

"Kick off the Jim Harbaugh era with an autographed Michigan Wolverines Harbaugh T-shirt," said the pop-up ad on the eCommerce site I had just logged on to. It was a highly personalized message designed to target a college football fan like me, on a site selling, among other things, college football merchandise at the beginning of the Big Ten football season.

It was perfect except for one thing – I went to The Ohio State University!

For the uninitiated, presenting a personalized message hawking Michigan Wolverines merchandise, like the one above, to an Ohio State Buckeye is like waving a red cape in front of a bull. It triggers responses fueled by one of the most storied rivalries in college football, dating back over a hundred years. These are not pleasant responses, and certainly not the kind that the person who designed this promotion was hoping for.

So what went wrong? It could be any one of a number of things, from insufficient user profile data to incorrect or conflicting personalization rules.

One thing, however, is certain – the personalization implemented on the site was not adequately tested.

Personalization tools and platforms that provide a myriad of ways to present relevant content and promotions to users on an eCommerce site have proliferated in the past few years. They have at their disposal a variety of data about the user, both volunteered and inferred, to build personalization rules and drive specific digital experiences. This same variety of data and complexity of rules makes testing personalization that much more complex, and the likelihood that the testing is shortchanged that much more real.

It does not have to be so. Here are three things you can do to make sure that the personalization you build on your site works correctly:

Simulate all attributes that drive personalization in your testing: Your personalization rules depend on many attributes (data) about your user. Make sure that your test data includes all combinations of these attributes to verify the personalization rules. Pay particular attention to test data that drives negative tests – if the person testing the site mentioned above had used test data corresponding to a Buckeye in addition to a Wolverine, they would have caught the problem before it reached the public site. Testing large combinations of test data is easier and significantly more economical if you use test automation that supports data-driven testing.

Test the integration of the personalization engine and the website: Unit testing the personalization rules is important, but do not skip the step of testing the website after the personalization has been integrated. Testing the website from the user interface will show you exactly what the user experiences and will help you identify details that can be easily missed in unit testing.
Enable the people designing personalization rules to perform the tests themselves: The person designing the personalization rules obviously knows how they should work and is best equipped to come up with test scenarios that make sure they work correctly. So why not equip them to run these tests themselves right after the personalization rules are built? They can validate the results and make sure that the personalization works just the way they intend. By moving this testing closer to the experts and testing often, you can reduce the time and cost required to test your personalization.
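The data-driven approach above can be sketched in a few lines of code. The example below is a minimal, illustrative sketch: the `choose_promotion` function stands in for whatever rules engine your site actually uses, and the test table includes the negative case that would have caught the Buckeye mishap.

```python
# Hypothetical personalization rule under test: a stand-in for the
# rules engine that maps a user profile to a promotion.
def choose_promotion(profile):
    if profile.get("favorite_team") == "Michigan Wolverines":
        return "Autographed Harbaugh T-Shirt"
    return "Big Ten Season Kickoff Sale"

# Data-driven test table: one row per simulated user, including the
# negative case (a Buckeye must never be shown Wolverines merchandise).
TEST_DATA = [
    ({"favorite_team": "Michigan Wolverines"}, "Autographed Harbaugh T-Shirt"),
    ({"favorite_team": "Ohio State Buckeyes"}, "Big Ten Season Kickoff Sale"),
    ({}, "Big Ten Season Kickoff Sale"),  # no profile data at all
]

def run_data_driven_tests():
    """Return the list of failing cases; an empty list means all passed."""
    failures = []
    for profile, expected in TEST_DATA:
        actual = choose_promotion(profile)
        if actual != expected:
            failures.append((profile, expected, actual))
    return failures
```

In a real suite, each row would drive a full UI test rather than a function call, but the principle is the same: covering a new user profile is a one-line addition to the data, not a new test.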

I would like to add a final note of caution about personalization that uses inferred data, like geo-location or language, in addition to user profile data. Make sure that your test data includes these variations to tease out potential conflicts between the two sets of rules. I am not sure whether the site above uses inferred data in its personalization, but as I look forward to the football season this year, I have a note of caution for the person testing it – I may live in the state of the Scarlet Knights, but I still bleed Scarlet and Gray!

Winter Is Coming! A Call to Action for Software Testing

As I look forward to the start of Season 6 of the HBO series Game of Thrones in April, the motto of House Stark – "Winter Is Coming!" – both a warning and a call to action to maintain constant vigilance for tough times ahead, is beginning to take on a strangely familiar tone.

Falling oil prices, stock market turmoil and unicorns crashing down from the skies could well be harbingers of tough times ahead – Winter. And we would do well to heed warnings to remain vigilant.

The Need for Vigilance

Public companies facing falling stock prices are beginning to cut costs to boost balance sheets. Falling commodity prices are also going to reduce the amount of cash available for investment by sovereign funds and large investment funds. IT budget cuts and the prospect of delayed or smaller future funding rounds will force those responsible for building and managing software to look for more efficient ways to deploy their capital. This vigilance to achieve capital efficiency has to start at the top, in the C-Suite.

One of the lasting outcomes of the last winter in the US tech economy was a significant increase in offshoring as a way of achieving capital efficiency. Until then, offshoring was seen as something only large enterprises did, but it became a must-have even for startups and mid-sized companies. Investors emphasized the need for a ‘lower cost’ development center as a way to stretch their investment dollars.

But winters pass, seasons change and we tend to forget the travails of the cold as we bask in the warmth of fat balance sheets flush with cash and easy funding.

The vigilance that was the norm during winter has been relaxed and at times forgotten as we saw unicorns emerge and rise to the skies. We will however be wise to bring that vigilance back so that we can continue to thrive when the frost sets in.

With most software testing done offshore, labor costs are already at an all-time low. Steps to bring greater efficiency to the deployment of capital for testing have to involve something other than just moving jobs to lower-cost economies. These steps must include a level of automation that takes humans out of low-value grunt tasks in testing and frees them to perform higher-value activities. Activities like test design and planning require domain expertise and subject matter knowledge, and when done right, they contribute significantly more towards improving software quality.

Freeing up your testing teams to pursue high-value tasks means you first have to get them out of the mundane manual testing they have been used to.

The challenge is two-fold. First, building and maintaining test automation has typically required specialized skills and expensive tools – tools you may not have the capital budget to buy and skills you may not have in-house. Second, the learning curve for manual testers taking up test automation has been extremely steep, and one that most are unable to navigate. This dual challenge of high upfront capital costs and a workforce that cannot transition to a new future has kept inefficient testing practices going for far too long.

There is a new reality in test automation today! A reality that when adopted will not just allow us to survive the winter, but thrive as well.

A New Reality

It is possible today to build and manage test automation without expensive, proprietary tools that require upfront capital expenditure. Legacy tools that carry the burden of per-user or node-locked licenses, and tools that are primarily workstation-centric in building and running test automation, are being replaced by nimble SaaS and Cloud-based testing platforms. These are easy to use and support the current reality of distributed software development and testing teams. Out-of-the-box, standards-based integration with DevOps tools ensures that product management, development and testing teams work in sync. Further, by replacing the upfront capital expenditure required for software licenses with pay-as-you-go subscriptions, these platforms help companies conserve cash and provide a true measure of productivity and ROI in testing operations.

Move spending on tools from capital to operational budgets to conserve cash

While unit tests built by developers work well to test the building blocks of software, testing real user journeys and business workflows requires a great deal of understanding of the business. The new test automation platforms make it possible to build automation for these user journeys and business workflows using people who understand the business – subject matter experts and domain experts. By eliminating the dependence on programmers for designing and building this test automation, companies can significantly reduce the cost associated with it. Further, by helping companies use manual testers who understand the business to build the automation, these platforms help transition an existing testing workforce to a new reality.

Make it easy for your existing manual testing teams to adopt test automation. Your testing workforce will be more efficient and productive, helping you stay competitive during tough times.

Finally, the testing platforms in this new reality also help companies improve their software development and testing processes by providing valuable and actionable insights about their software quality – insights powered by analytics and detailed reports on test results that are readily available to all stakeholders, without long hours or precious resources spent crunching numbers. Test automation is critical to obtaining the data that powers these insights. The greater the extent of test automation, the better the insights to improve software quality.

A Call to Action

At the current point in Game of Thrones, it has been almost nine years since the last winter, and most of the key characters in the plot are too young to remember it. They are too engrossed in their current squabbles to realize the larger threat to the realm that winter brings. It is up to the older characters who have experienced previous winters to caution them to prepare or perish.

Nine years – in our present context, that would be 2007. How would those of us who have been through the last economic winter remember it and caution each other to prepare for the next one?

It is not a matter of if, but when – Winter Is Coming!

Changing Face of Retail

On Tuesday, November 3rd, 2015, a store opened in a suburban shopping mall. An everyday occurrence in cities around the country that would normally not have garnered much attention – except this was no ordinary store. Amazon had opened its first brick-and-mortar store, in the open-air shopping mall at University Village outside Seattle, WA.

The fact that this was a bookstore had a lot of people scratching their heads. The company that single-handedly led to the demise of a number of independent bookstores (and some chains, like Borders), the company whose founder had famously said that the bookstore was redundant, was actually opening a physical bookstore. Media reaction was mixed: while some thought Amazon was going the way of Apple and Microsoft, others were convinced it wanted to add insult to injury by muscling out the remaining few independent bookstores.

I think there is more to this move by Amazon.

Amazon built a business for itself by eliminating the inherent inefficiencies in the supply chains of some businesses, while using a new (at the time) medium for engaging with its customers. That model worked well, and once Amazon had established itself as the leading player in the industries that (until then) were really inefficient, it moved on to others and continued to repeat the model. That was over two decades ago.

The face of retail has changed.

Amazon’s success has pushed most retailers to add an online presence to their physical stores. These two avatars of the business, however, were and still are largely run independently of each other, with the customer’s experience differing significantly in each channel.

The inefficiencies that existed in the supply chain two decades ago may be a thing of the past, but the ones that exist in the way customers are engaged across the multiple channels that a retailer uses to interact with its customers are glaring.

Amazon has set its sights on this inefficiency. Omni-channel strategies for marketing, customer engagement and operating a business in general are in their infancy and have a long way to go. Vendors are offering tools to build omni-channel marketing programs, but building a winning strategy and executing on it is not easy.

Amazon’s brick-and-mortar store is its first step to showcase its learning and thought leadership in this area. The fact that Amazon calls its store a ‘physical extension of’ should be evidence enough.

Books are just a start. Be prepared to have this strategy expand to other areas as Amazon learns and continues to innovate from this initial experience.

Here is a company that has changed the retail, supply chain and logistics, and technology (AWS) landscapes. I believe that we are looking at the next wave of innovation to come from this company.

If it gets this one right, Amazon may not just be the World’s Largest Online Retailer. It may as well drop the ‘Online’ from that tagline.

Content is King! Unless...

Content IS King!

There is no doubt about it. We are bombarded with information every waking minute of the day, thanks to the hyper-connected lifestyle that has become the norm. In this barrage from electronic media, advertising, social networks, text messages and email, we tend to subconsciously gravitate towards certain types of information more than others: information that is rich in content. Content, and more importantly original content, stands out amongst the noise and gets our attention.

If your goal is to get your audience’s attention, it is obvious that you must focus on the content you provide.

Personalized Content Rules Supreme!

Not all content, however, is created equal. What is appealing to me may not interest you, and what motivates you to take action (buy a product, go to a restaurant or watch a movie) may have no impact on a third person. This adds another important dimension to the challenge of getting the content right – it is getting the right content to the right person. In other words, Personalized Content.

Retailers are quickly taking a lesson from this and implementing content-driven commerce solutions to present customers with products and services that are relevant, interesting and timely. Personalized content based on a customer’s profile, buying preferences, location or just about any other attribute that can be directly or indirectly gleaned is being used to drive revenue.

And this is beginning to deliver results in increased total revenue, higher revenue per customer and greater customer retention.


There is, however, a wrinkle here (there is always a wrinkle). As great as personalized content is at delivering a positive result, improperly personalized or irrelevant content has the opposite effect. If you are presented with content that is personalized for my tastes and buying behavior, chances are you will be less inclined to make a buying decision than if you were presented with generic (non-personalized or mass-market) content.

Companies implementing commerce solutions based on personalized content now have the added responsibility of making sure that they get this personalization and the delivery of personalized content right, or pay the price for it.

eCommerce and content-based commerce platform vendors provide support for capturing a variety of customer data and building sophisticated rules based on this data to drive personalization and present original content. But who is making sure that these rules are built correctly and that the personalized content is being presented correctly?

Testing such implementations is often done with existing manual testing processes that are grossly inadequate in their ability to simulate the variety of personalization options required to test these rules. What is scarier is the hubris that sometimes accompanies these implementations, leading to the belief that they will always be done right and don’t require any additional or different methods of testing. We all know what such hubris can lead to.

Content Can Still Be King

There are some steps that can be taken to make sure that personalized content is built and delivered correctly. They involve developing a comprehensive testing strategy for these implementations and putting in place a testing solution that can consistently run tests in a scalable manner using a representative test data set. Here are a few steps you can focus on to get started in building such a strategy:

  • Test the User Journey – Follow in the footsteps of your user on your site. This will identify issues relating to irrelevant or incorrectly delivered content that your users may experience on your site.
  • Automation – Implement a test automation solution that helps you easily build tests to verify that specific content is being delivered correctly. Since you will need to run the same tests for different personalization criteria and content, automating your tests will deliver huge payoffs.
  • Test Data – Build a representative set of test data that covers the entire range of customer attributes used to drive personalization. Then map out the range of content that should be delivered for this personalization. Make sure that your test data covers both positive and negative tests. The latter is sometimes more important than the former, as it helps avoid presenting incorrect content.
  • Continuous Testing – Content-driven commerce requires frequent content updates and frequent changes to personalization rules. Make sure that you can tie the execution of your tests to these updates. This continuous testing will help you identify issues and rectify them before they become a problem.
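The test data step above can be sketched as a table-driven check that pairs each customer profile with content that should appear (positive test) and content that must not appear (negative test). The `select_content` rule and the segment names below are hypothetical, for illustration only.

```python
# Hypothetical content-selection rule: a stand-in for your platform's
# personalization engine.
def select_content(customer):
    if customer.get("segment") == "outdoor" and customer.get("region") == "northeast":
        return "winter-hiking-gear"
    if customer.get("segment") == "outdoor":
        return "trail-running-gear"
    return "seasonal-bestsellers"

# Each row: (customer attributes, content that SHOULD be shown,
# content that must NOT be shown).
CASES = [
    ({"segment": "outdoor", "region": "northeast"},
     "winter-hiking-gear", "seasonal-bestsellers"),
    ({"segment": "outdoor", "region": "south"},
     "trail-running-gear", "winter-hiking-gear"),
    ({}, "seasonal-bestsellers", "trail-running-gear"),
]

def verify_personalized_content():
    """Return True only if every positive and negative check passes."""
    for customer, expected, forbidden in CASES:
        shown = select_content(customer)
        if shown != expected or shown == forbidden:
            return False
    return True
```

Extending coverage then means adding rows to the table, and the same table can feed a continuous-testing job that runs whenever content or rules change.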

Do you use personalized content to drive customer engagement and revenue on your site? If so, how do you make sure that you deliver the right content to the right people? Please feel free to share your thoughts and experiences.

Dear Customer, Quality is YOUR problem too!

I usually tell this to my customers at the beginning of an engagement when we discuss product quality. After an initial silence, the response I usually get is, “What do you mean? I am paying you to make sure that my software is built right and just the way I want it. It’s your job to make sure that it works correctly. Why is it my problem?”

I then explain why it is their problem and what they can do about it!

Ensuring software quality has long been seen as the sole responsibility of the product development team: software developers, the QA team, infrastructure and network teams, and the project management team. The product development team (whether in-house or outsourced) is tasked with understanding the customer’s needs and building a software product to meet those needs.

The challenge, however, has always been in understanding the customer’s needs and meeting those needs. An exercise that involves getting objective results (meeting those needs) to a subjective problem (understanding the customer’s needs) has the odds stacked against it from the beginning. Meanwhile, Quality as perceived by the customer is often a measure of how well those needs are met.

SDLC processes have been developed and have evolved over decades with the goal of addressing this problem. Reducing subjectivity in defining customer needs has been a huge component of the ‘requirements definition’ phase of all SDLCs. Detailed requirements documents, functional and technical specifications, use cases, epics and stories are just a few of the tools that have been used to add objectivity to the customer’s needs. Agile methodologies have gone a step further and included the customer as a key participant in the development process in an effort to improve the understanding of the customer’s needs.

While customers are willing to participate in helping with understanding needs, they must also be involved in determining what it means to meet those needs. Establishing acceptance criteria, and techniques like Behavior Driven Development (BDD) and Test Driven Development (TDD), build rubrics that can be used to drive tests to assess quality. These help, but they don’t solve the problem completely.

The issue here is not with the process, but usually with the unwillingness or inability of the customer (or the appropriate business stakeholder) to define these rubrics with a level of detail that is actually meaningful in assessing quality. Another oft-ignored component in this process is the test data. Using the right test data also goes a long way in establishing quality (meeting needs). This is an area where the customer’s participation is critical, because they know their data best. This is especially true for most current software systems, where quality depends on a lot more than just source code. It depends on data, configurations, content, rules and usage patterns.

Development teams can do a lot to ensure that the software product they put out is of high quality, but they cannot do it alone. Customer Participation and, more importantly, Customer Accountability are critical to achieving high quality.

Have you talked to your customers about their role in software quality? How have those conversations gone? Would you like to share any of your stories?

A Walk in your Customer's Shoes

The Customer Journey on an eCommerce Site receives a lot of attention. Every site wants to understand what a customer wants or prefers and builds the site to suit that preference. The hope is that this will result in a purchase by the customer and an increase in revenue for the site.

Tools to develop customer journey maps and A/B testing tools to understand customer preferences are numerous. These help define the journey that the customer will make through the site and make sure that the path trodden is filled with objects of interest to the customer and devoid of experiences that will cause them to abandon the journey. They attempt to attract the right customer and interact with them in an appropriate manner to ensure that the customer will transact with the site.

In spite of doing all this, sites often find that they don’t see the results they expect.

So what are they missing?

They are missing the recognition that a customer’s journey includes a key aspect of any journey – the experience. Understanding the customer’s experience is important to ensuring a successful outcome of the journey, i.e., a purchase transaction.

There is an old saying: “You can’t really understand a person until you have walked a mile in their shoes.” If you really want to understand your customer’s experience on your eCommerce site, you just have to get in their shoes and make the journey through your Web site.

This journey will show you what the customer sees and experiences on your site: the personalized content that is presented, the navigation choices available, the promotions on offer and the product choices suggested based on customer targeting rules or past purchases. The experience is also likely to vary based on the means (browser, device, etc.) the customer uses to access your site. Given that the path taken by different customers through your site can vary significantly (in terms of content, personalization, access method, etc.), so will their experience.

Understanding the variety of experiences on your site will help you close the loop between designing the customer journey and validating the impact of that journey on your customer, to ensure a successful outcome. This understanding can be gained by testing the eCommerce site with a simulated user (synthetic testing) in a manner that captures the user journey through your site and your site’s response to that journey. Testing every possible variation (different content, personalization and navigation paths) of the customer journey, and evaluating the response of the different systems and components that make up your site for all these variations, requires what I call Deep Synthetic Testing. The depth of this testing is critical to gathering adequate and relevant data on the customer experience – data that can then be processed by analytics and reporting tools to provide the necessary understanding of the customer experience and validate the journey.
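A minimal sketch of what deep synthetic testing might look like, assuming three illustrative journey dimensions (persona, device and entry path); in practice, a function like the hypothetical `simulate_journey` below would drive a real browser session and capture far richer data about the experience.

```python
from itertools import product

# Illustrative journey dimensions; a real site would have many more.
PERSONAS = ["new_visitor", "returning_member"]
DEVICES = ["desktop_chrome", "mobile_safari"]
ENTRY_PATHS = ["search", "promo_email_link"]

def simulate_journey(persona, device, entry_path):
    # Stand-in for driving a real browser session through the site and
    # recording what this synthetic user actually experiences.
    return {"persona": persona, "device": device, "entry_path": entry_path,
            "completed_checkout": True}

def deep_synthetic_test():
    # Exercise every combination of journey variations and collect the
    # observed experience for analytics and reporting.
    return [simulate_journey(p, d, e)
            for p, d, e in product(PERSONAS, DEVICES, ENTRY_PATHS)]
```

The "depth" comes from the cross-product: even these three small dimensions yield eight distinct journeys, and each additional variation multiplies the coverage the analytics layer has to work with.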

What have you done to understand your customer’s experience on your site? What has worked for you and what has not? Would you be willing to share this?

The Wait is Over! Data Driven Testing is Here

We are extremely thrilled to announce the support for Data Driven Testing and Keyword Driven Testing in eureQa®.

You can now run your test scenarios in eureQa® against large sets of test data directly from the eureQa® Cloud. You don’t have to build any new tests. You can use the tests you currently have and enable them for Data Driven Testing.

You can do this without programming!

With this release we have also made a number of changes to the eureQa® workflows and UI with the goal of making it more user friendly and improving user productivity. You can learn more about the new features added on our Data Driven Testing page.

If you have not already signed up for eureQa®, sign up for a Free Trial today.