Building a database of A/B testing examples
Answers
Yes, definitely. These elements are super important. There are plenty of small businesses that lack the technical expertise needed to increase engagement on their websites, which for most of them are their best promotional tools. This would be a great asset for small to medium-sized businesses that want to improve engagement, for one. A few other ideas come to mind too. I would look at engagement for those users to see if there is a noticeable difference before and after using the information provided from your db.
Great idea - I'd be interested, especially in pricing A/B test results. Are you up and running already?
There are already a couple of products in market like this, goodui.org being the main one.
However, personally, I don't believe that this kind of meta-analysis of experiments from different sites has much value, for a couple of reasons. I'm only really outlining this for you because it might be food for thought if you want to create products for the experimentation community:
Whilst the theory of meta-analysis is technically sound, it is also fraught with issues relating to the complexity of web experiences:
Tests must be subjectively categorized and then analysed as 'patterns' when there is zero evidence that that thing was the cause of the result. For example, you might think a test won because you removed the voucher code box, but it equally could have won because of the change in page layout.
No two tests are the same in any controlled way. Meta-analysis mainly comes from sciences where experiments can be repeated under (relatively) identical, controlled conditions. That is impossible on the web, and A/B testing cannot really be compared to that kind of controlled experimentation. By comparing two different experiments you instantly lose any control you had over audiences and so on, because they will not be the same (the simulation sketch after this list illustrates the point).
The greatest benefit to be had from experimentation is by integrating it with business and brand strategy. A good business strategy is aimed at carving out a unique and differentiated direction of travel for that brand. The opportunity with experimentation is to iteratively develop and innovate the business through learning and adaptation. This means that experimentation should aim at being meaningful within the context of the brand strategy. Meta-analysis of experiments from different sites is, in many ways, the opposite of this because it seeks to make the ‘lowest common denominator’ changes which would be beneficial to any business anywhere. Conversion uplift and revenue might be achieved but what has been furthered in terms of strategy and differentiation? This is not to say that it isn’t valuable if you simply want to do ‘CRO’ but my argument is that nobody should merely do CRO.
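To make the audience point above concrete, here is a minimal Python simulation with entirely invented numbers (a sketch, not a real meta-analysis): the same "pattern" wins on a hypothetical mobile-heavy site, loses on a hypothetical desktop-heavy site, and still looks like a modest winner when the two experiments are pooled.

```python
# Hypothetical illustration (all figures invented): pooling A/B results across
# sites with different audience mixes can point the wrong way for one site.
# Each entry is (visitors, conversions) per segment and per experiment arm.
sites = {
    "site_a": {  # mostly mobile traffic; the change helps mobile users
        "mobile":  {"control": (9000, 270), "variant": (9000, 360)},
        "desktop": {"control": (1000, 60),  "variant": (1000, 50)},
    },
    "site_b": {  # mostly desktop traffic; the change hurts desktop users
        "mobile":  {"control": (500, 15),   "variant": (500, 20)},
        "desktop": {"control": (4500, 270), "variant": (4500, 225)},
    },
}

def lift(site):
    """Overall variant-vs-control conversion lift for one site's experiment."""
    c_vis = sum(seg["control"][0] for seg in site.values())
    c_con = sum(seg["control"][1] for seg in site.values())
    v_vis = sum(seg["variant"][0] for seg in site.values())
    v_con = sum(seg["variant"][1] for seg in site.values())
    return (v_con / v_vis) / (c_con / c_vis) - 1

for name, site in sites.items():
    print(f"{name}: lift = {lift(site):+.1%}")

# Naive "meta" view: pool the raw numbers from both experiments.
pooled = {
    seg: {
        arm: (
            sites["site_a"][seg][arm][0] + sites["site_b"][seg][arm][0],
            sites["site_a"][seg][arm][1] + sites["site_b"][seg][arm][1],
        )
        for arm in ("control", "variant")
    }
    for seg in ("mobile", "desktop")
}
print(f"pooled: lift = {lift(pooled):+.1%}")
# site_a wins (~+24%), site_b loses (~-14%), yet the pooled view still
# reports a modest "win" (~+6.5%) for the pattern.
```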
Related Questions
-
Does anyone know of a good SaaS financial projection template for excel/apple numbers?
Here is a link to a basic model - http://monetizepros.com/tools/template-library/subscription-revenue-model-spreadsheet/ Depending on the purpose of the model you could get much more elaborate or simpler. This base model will help you to understand the size of the prize, but if you want to develop an end-to-end profitability model (Revenue, Gross Margin, Selling & General Administrative Costs, Taxes) I would suggest working with a financial analyst. Your biggest drivers (inputs) in a SaaS model will be CAC (Customer Acquisition Cost), Average Selling Price / Monthly Plan Cost, Customer Churn (how many people cancel their plans month to month), and Cost to Serve. If you can nail those down with solid backup data for your assumptions, that will make things a lot simpler. Let me know if you need any help. I spent 7 years at a Fortune 100 company as a Sr. Financial Analyst.
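To make those drivers concrete, here is a minimal, hypothetical Python sketch of a monthly subscription projection using the inputs mentioned above (new customers per month, plan price, churn, CAC, cost to serve). Every value is a placeholder assumption, not a benchmark.

```python
# Minimal, hypothetical monthly SaaS projection using the drivers mentioned
# above. Every input value is a placeholder; replace it with your own data.

months = 24
new_customers_per_month = 50      # acquisitions per month
monthly_plan_price = 99.0         # average selling price per customer per month
monthly_churn_rate = 0.05         # share of customers cancelling each month
cac = 400.0                       # customer acquisition cost per new customer
cost_to_serve = 20.0              # hosting/support cost per customer per month

customers = 0.0
cumulative_profit = 0.0
for month in range(1, months + 1):
    customers = customers * (1 - monthly_churn_rate) + new_customers_per_month
    revenue = customers * monthly_plan_price
    costs = new_customers_per_month * cac + customers * cost_to_serve
    cumulative_profit += revenue - costs
    print(f"month {month:2d}: customers={customers:7.1f} "
          f"MRR=${revenue:10,.0f} cumulative profit=${cumulative_profit:12,.0f}")

# Rough steady-state customer count implied by these assumptions:
print("steady-state customers ≈",
      round(new_customers_per_month / monthly_churn_rate))
```

Under these assumptions the customer base levels off around new customers per month divided by the churn rate, which is a quick sanity check worth doing before building anything more elaborate.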
-
What is the point of having multi-year contracts in SAAS if the customer does not pay upfront for the 2nd year?
If you have an enforceable contract, the client is obligated to pay for the services received. As a business owner, I would be very concerned if a SAAS was demanding upfront payment for 2 years.
-
How important is a polished/pretty prototype to pitch an idea to potential customers and/or investors? I'm using this as a means to support my pitch.
It depends how clear your idea's value is without it, or with a rough one, or with mock-ups vs. a polished prototype. There are a few goals you are trying to conquer with a prototype; overall it's about concept clarity and valuation. 1. Customer/investor acceptance: "I want that." 2. Customer feedback: "You should change that." 3. Investor valuation: "Wow, you are that far along" vs. "It's just an idea." Without more details, only you know what it will take to get the right clarity to customers and investors to answer the above.
-
Freemium v.s. free trial for a marketplace?
It depends on a number of factors but I'd boil it down to two key things to start: 1) What is your real cost to provide a free plan or trial? 2) Who exactly is your customer, what are they used to paying, and whom and how do they pay today?

When you say "online workforce marketplace" it sounds as though you're placing virtual workers. If that's the case, or if you're paying for the supply side of the marketplace, the question is how much you can subsidize demand. Depending on where you're at in the process, I'd also question how much you can learn about the viability of your marketplace by offering a free version, assuming again that free is actually a real cost to you.

I was part of a SaaS project that started charging people for early access based mostly on just a good landing page (we clearly stated they were pre-paying) and we were amazed at the response. I've also run a SaaS product that offered free trials and realized that the support costs and hand-holding and selling required to convert from free trial to paid weren't worth it, despite the product's significant average ARR. You might be better off providing a "more information" sign-up form (to capture more leads) and letting them ask for a free trial while only showing your paid options. I've been amazed at the lead-capture potential of a simple "Have questions? Click here and we'll contact you."

This is all the generalized advice I can offer based on the limited information I have, but happy to dive in further if you'd like on a call.
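As a back-of-the-envelope check on whether a free trial pays for itself, here is a hypothetical Python sketch comparing the cost of supporting trials against the margin from the ones that convert. All values are invented placeholders, not benchmarks.

```python
# Hypothetical back-of-the-envelope check: does a free trial pay for itself?
# Every number below is an invented placeholder, not a benchmark.

trials_per_month = 200
trial_support_cost = 35.0             # hand-holding/support cost per trial
trial_to_paid_conversion = 0.04       # share of trials that convert to paid
annual_revenue_per_customer = 1200.0  # average ARR per converted customer
gross_margin = 0.8                    # margin on that revenue

cost_of_trials = trials_per_month * trial_support_cost
converted = trials_per_month * trial_to_paid_conversion
margin_from_conversions = converted * annual_revenue_per_customer * gross_margin

print(f"monthly trial cost:          ${cost_of_trials:,.0f}")
print(f"customers converted:          {converted:.1f}")
print(f"first-year margin from them: ${margin_from_conversions:,.0f}")

# Break-even conversion rate given these assumptions
break_even = trial_support_cost / (annual_revenue_per_customer * gross_margin)
print(f"break-even conversion rate:   {break_even:.1%}")
```

With these made-up numbers the trial barely breaks even, which is the kind of math worth running with your own figures before committing to a free tier.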
-
How can I manage my developers' performance if I don't understand IT?
Whenever you assign them a task, break the task down into small chunks. Make the chunks as small as you can (within reason, and to the extent that your knowledge allows), and tell your devs that if any chunk seems large, they should further break it down into bite-size pieces. For instance, for the overall task of making a new webpage, _you_ might break it down as follows: 1) Set up a database. 2) Make a form that takes user email, name, and phone number and adds them to the database. 3) Have our site send an email to everyone above the age of 50 each week. When your devs take a look at it, _they_ might further break down the third step into: A) Set up an email service. B) Connect it to the client database. C) Figure out how to query the database for certain users. D) Have it send emails to users over 50.

You can keep using Asana, or you could use something like Trello, which might make more sense for a small company and might be easier for you to understand and track yourself. In Trello you'd set up 4 columns titled "To Do", "Doing", "Ready for Review", "Approved" (or combine the last two into "Done"). You might want to tell them to only have tasks in the "Doing" column if they're actually sitting at their desk working on them - for instance, not to leave a task in "Doing" overnight after work. That way you can actually see what they're working on and how long it takes, but that might be overly micro-manager-y.

At the end of each day / week when you review the completed tasks, look for ones that took longer than average (since, on average, all the tasks should be broken down into sub-tasks of approximately the same difficulty). Ask them about those tasks and why they took longer to do. It may be because they neglected to break them down into chunks as you had asked (in which case you ask them to do that next time), or it may be that some unexpected snag came up, or it may be a hard task that can't be further broken down. In any case, listen to their explanation and you should be able to tell if it sounds reasonable; if it sounds fishy, google the problem they say they encountered. You'll get a better feel for their work ethic and honesty from how they answer the question, without worrying as much about their actual words. When you ask for more details about why a task took longer, make sure you don't do it in a probing way - make sure they understand that you're doing it for your own learning and to help predict and properly plan future timelines.
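As a small illustration of the "look for tasks that took longer than average" step, here is a hypothetical Python sketch. The task names and durations are made up; in practice you would pull them from a Trello or Asana export.

```python
# Hypothetical sketch: flag completed tasks that took noticeably longer than
# average. Task names and durations (in hours) are made up; in practice you
# would pull these from a Trello or Asana export.

completed_tasks = {
    "set up database": 3.0,
    "build signup form": 4.5,
    "connect email service": 2.5,
    "query users over 50": 11.0,   # stands out; worth asking about
    "send weekly emails": 3.5,
}

average = sum(completed_tasks.values()) / len(completed_tasks)
threshold = 1.5 * average  # flag anything 50% above the average

print(f"average task time: {average:.1f} h")
for task, hours in completed_tasks.items():
    if hours > threshold:
        print(f"ask about: '{task}' took {hours:.1f} h "
              f"({hours / average:.1f}x the average)")
```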