UK ecommerce platforms alone lose £1.41 billion every year due to poor user experiences. Globally, companies are estimated to lose 20–30% of their revenue to substandard software.
The failures of the marketplace startups below show how badly things can go wrong if you don’t make quality your mantra.
Quality is therefore one of our key values for a (very) good reason. We build platforms for successful outcomes and long-term growth, which compels us to do things right from the start and on a continuous basis. Equally important, our clients generally make long-term commitments to us. It’s only right that in exchange we build products that not only endure, but thrive.
In isolation, bugs, regressions, or deviations from the agreed-upon solution may not seem like much to an individual engineer. But these issues tend to pile up, which WILL damage our relationship with clients. Even worse, they also damage the client’s relationship with their product’s end-users.
This negative domino effect continues internally as well. Substandard code quality makes it harder for new engineers to join a project and is a stumbling block to rapid iteration. This in turn makes it difficult for our clients to grow or improve their platforms.
Our philosophy is therefore that quality is the responsibility of everyone involved in the development process. That’s why we have detailed guidelines, including functional specifications, in place for each task and involve clients closely in the testing process.
Quality Assurance is built into our project flow, from design through functional and technical specifications to development. We rely on a Test-Driven Development (TDD) approach, which requires detailed test cases to be written from the specifications before any coding is done.
That means:
- a failing test is written first, based on the specification;
- just enough code is written to make the test pass;
- the code is then refactored, with the tests guarding against regressions.
Our TDD approach forms part of a Continuous Integration/Continuous Delivery (CI/CD) process. Engineers push their code to the repository regularly, where it is tested automatically. This allows us to detect immediately whether an issue could break the feature or application. If no issues are detected, the code is integrated and deployed.
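To illustrate the test-first flow, here is a minimal sketch using Jest and TypeScript. The `calculateCommission` function and its 10% rate are hypothetical examples invented for this illustration, not code from a real project:

```ts
// commission.test.ts — written FIRST (red phase), directly from the spec.
import { calculateCommission } from './commission';

describe('calculateCommission', () => {
  it('applies a 10% marketplace commission to an order total', () => {
    expect(calculateCommission(200)).toBe(20);
  });

  it('rejects negative order totals', () => {
    expect(() => calculateCommission(-5)).toThrow(RangeError);
  });
});

// commission.ts — the minimal implementation written to make the tests pass
// (green phase), later refactored with the tests as a safety net.
export function calculateCommission(orderTotal: number): number {
  if (orderTotal < 0) throw new RangeError('Order total cannot be negative');
  return orderTotal * 0.1;
}
```

In CI, this same suite runs on every push, so a regression in the commission logic blocks the merge before it reaches the client.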
ESLint is a configurable linting tool that analyses JavaScript for errors, vulnerabilities, and stylistic issues to improve code quality.
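For illustration, a minimal ESLint flat config (`eslint.config.mjs`) might look like the sketch below; the specific rules shown are examples, not our full ruleset:

```js
// eslint.config.mjs — a minimal sketch of an ESLint flat config.
import js from '@eslint/js';

export default [
  js.configs.recommended, // ESLint's recommended baseline rules
  {
    rules: {
      eqeqeq: ['error', 'always'], // require === / !== to avoid coercion bugs
      'no-unused-vars': 'error',   // flag dead variables
      'no-console': 'warn',        // discourage stray debug output
    },
  },
];
```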
PHP CodeSniffer is a tool that helps maintain coding standards and code quality in PHP, JavaScript, and CSS. It scans code for violations of predefined coding standards and can automatically fix issues.
CodeRabbit is an AI-powered code review platform that delivers real-time, context-aware feedback on pull requests, reducing the time and effort needed for manual code reviews. It learns from user input and improves its suggestions over time, adapting to your project’s evolving coding standards and practices.
CI/CD automation tools like GitHub Actions, GitLab CI/CD, and Jenkins allow us to automate workflows such as testing, building, and deploying code. GitHub Actions and GitLab CI/CD run directly within their respective repositories, while Jenkins is self-hosted. Here’s a comparison of their respective advantages:
| GitHub Actions | GitLab CI/CD | Jenkins |
| --- | --- | --- |
| Workflows can include tasks that run in parallel or sequentially. | Advanced deployment strategies, e.g. canary or blue-green deployments. | Highly customisable with plugins. |
| Reusable or prebuilt scripts to automate tasks. | Advanced pipeline visualisation. | Supports complex workflows & integrations. |
| Supports most languages. | Deep cloud integration (AWS, Google Cloud, Azure, Kubernetes). | Large ecosystem (1,800+ plugins). |
| Matrix workflows can simultaneously test across multiple OS versions, programming languages, or dependencies. | Built-in security scanning (SAST, DAST), dependency scanning, and compliance checks directly in pipelines. | Free (self-hosted). |
| Low maintenance. | Docker-based by default. | |
Xray is a test management tool that integrates with Jira, allowing teams to manage testing activities directly within Jira. It helps with test planning, execution, tracking, and reporting in Agile and DevOps environments.
Why we use Xray:
✅ Full Integration with Jira – Links test cases, defects, and user stories seamlessly.
✅ Supports Manual & Automated Testing – Works with JUnit, TestNG, Selenium, Cucumber, etc.
✅ Test Repository – Organises test cases into folders and reusable test sets.
✅ Test Execution & Reporting – Provides real-time insights into test coverage, pass/fail rates, and defects.
✅ Requirements Traceability – Ensures each requirement/user story has associated test cases.
✅ CI/CD Integration – Works with Jenkins, GitHub Actions, Bamboo, GitLab, and other CI/CD tools.
✅ REST API Support – Automates test case management using API calls (see the sketch below).
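As a hedged illustration of that API, the sketch below pushes JUnit results to Xray Cloud from a Node script. The endpoint paths and payload shape are assumptions based on Xray’s public v2 API and should be verified against the current Xray docs; the credentials, project key, and file path are placeholders:

```ts
// upload-results.ts — a sketch of importing JUnit results into Xray Cloud.
// Endpoints are assumptions to verify against Xray's docs; all values are
// placeholders.
import { readFile } from 'node:fs/promises';

const XRAY = 'https://xray.cloud.getxray.app/api/v2';

async function uploadJUnitResults(projectKey: string, reportPath: string) {
  // 1. Exchange API credentials for a bearer token.
  const authRes = await fetch(`${XRAY}/authenticate`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      client_id: process.env.XRAY_CLIENT_ID,
      client_secret: process.env.XRAY_CLIENT_SECRET,
    }),
  });
  const token: string = await authRes.json();

  // 2. Import the JUnit XML report as a test execution in the Jira project.
  const xml = await readFile(reportPath, 'utf8');
  const importRes = await fetch(
    `${XRAY}/import/execution/junit?projectKey=${projectKey}`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/xml',
        Authorization: `Bearer ${token}`,
      },
      body: xml,
    },
  );
  console.log('Import status:', importRes.status);
}

uploadJUnitResults('DEMO', './reports/junit.xml').catch(console.error);
```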
Feature flags (aka feature toggles) support CI/CD by enabling or disabling specific features or functionalities within an application without requiring a deployment or code change, allowing teams to ship code more frequently.
New features or functionalities can be tested with a subset of users before rolling them out to everyone. Faulty features or functionalities can also be rolled back quickly if they cause any issues. We use tools like LaunchDarkly or Optimizely to implement feature flags.
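Here is a minimal sketch of a feature-flag check using LaunchDarkly’s Node server SDK; the flag key and user context are hypothetical, and call signatures may differ slightly between SDK versions:

```ts
// feature-flag.ts — a minimal sketch using LaunchDarkly's Node server SDK.
// The flag key and user context are hypothetical examples.
import { init } from '@launchdarkly/node-server-sdk';

const client = init(process.env.LAUNCHDARKLY_SDK_KEY ?? '');

async function shouldShowNewCheckout(userKey: string): Promise<boolean> {
  await client.waitForInitialization({ timeout: 10 }); // seconds

  // Falls back to `false` (feature off) if the flag cannot be evaluated.
  return client.variation(
    'new-checkout-flow',
    { kind: 'user', key: userKey },
    false,
  );
}

shouldShowNewCheckout('user-123').then((enabled) => {
  console.log(enabled ? 'Serving new checkout' : 'Serving current checkout');
});
```

Because the decision is a runtime flag lookup rather than a deployment, a faulty feature can be switched off for all users in seconds.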
Sentry helps engineers detect, diagnose, and fix application issues via real-time performance monitoring (e.g. slow API requests, database queries, and bottlenecks) and code-level error tracking (e.g. stack traces, error logs, and source maps) of both front-end and back-end code.
It supports multiple languages, including JavaScript, Python, Java, Node.js, and PHP. Other advantages are Automatic Issue Grouping and User Impact Analysis, which reduce noise and prioritise critical issues.
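A minimal Sentry setup in a Node service might look like the sketch below; the DSN and sample rate are placeholders:

```ts
// instrument.ts — a minimal sketch of Sentry error tracking in Node.
// The DSN and sample rate are placeholder values.
import * as Sentry from '@sentry/node';

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 0.2, // sample 20% of transactions for performance monitoring
});

try {
  throw new Error('Payment provider timeout'); // simulate a failure
} catch (err) {
  Sentry.captureException(err); // sends the stack trace + context to Sentry
}
```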
We use CloudWatch to define and monitor application-specific metrics (e.g. user activity and error rates) for real-time observability of AWS infrastructure and apps. This provides us with actionable insights for performance optimisation, troubleshooting, and overall system health. Other useful features are automated alerts, automatic scaling, and real-time log analysis.
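For illustration, publishing a custom metric with the AWS SDK v3 looks roughly like this; the namespace, metric name, and region are hypothetical:

```ts
// publish-metric.ts — a sketch of a custom CloudWatch metric (AWS SDK v3).
// Namespace, metric name, and region are hypothetical examples.
import {
  CloudWatchClient,
  PutMetricDataCommand,
} from '@aws-sdk/client-cloudwatch';

const cloudwatch = new CloudWatchClient({ region: 'eu-west-2' });

async function recordCheckoutError() {
  await cloudwatch.send(
    new PutMetricDataCommand({
      Namespace: 'Marketplace/Checkout', // hypothetical namespace
      MetricData: [
        {
          MetricName: 'CheckoutErrors',
          Value: 1,
          Unit: 'Count',
        },
      ],
    }),
  );
}

recordCheckoutError().catch(console.error);
```

An alarm on this metric can then page the team or trigger scaling before end-users notice degradation.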
Symfony Profiling is a built-in debugging and performance monitoring feature in Symfony that helps developers analyse and optimise PHP applications. It is powered by the Symfony Profiler and Web Debug Toolbar, which collect detailed information about requests, database queries, HTTP headers, execution time, and memory usage.
Database audits are built into all our projects. They allow us to track and log database activities to ensure security (e.g. detecting unauthorised access or SQL injection attempts), integrity (e.g. critical data is not altered or deleted improperly), and performance (e.g. identifying slow queries). They help engineers monitor who accessed the database, what changes were made, and when they occurred.
PerimeterX is a behaviour-based bot management solution that protects websites, mobile applications and APIs from automated attacks.
We use functional tests to verify that features and modules work according to their requirements and specifications. They typically simulate user inputs (e.g. button clicks, form submissions) to check that the software behaves as expected from the user’s perspective; see the sketch after the list below.
Types of functional testing include:
🔬 Unit Testing – individual components or functions work properly
🔬 Integration Testing – interactions between integrated components or systems work as expected
🔬 Regression Testing – recent code changes (bug fixes, enhancements, new features) do not negatively affect existing functionality
🔬 System Testing – all the parts of an entire application work together as expected
🔬 User Acceptance Testing – real-world scenarios confirm the application meets business requirements
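As an illustration of a functional test, the sketch below uses Jest and Supertest to exercise a hypothetical Express endpoint from the user’s perspective; the `/listings` route and payloads are invented for the example:

```ts
// listing.test.ts — a sketch of a functional test with Jest + Supertest.
// The Express app and /listings route are hypothetical examples.
import express from 'express';
import request from 'supertest';

const app = express();
app.use(express.json());
app.post('/listings', (req, res) => {
  if (!req.body.title) {
    return res.status(400).json({ error: 'title is required' });
  }
  res.status(201).json({ id: 1, title: req.body.title });
});

describe('POST /listings', () => {
  it('creates a listing from valid input', async () => {
    const res = await request(app)
      .post('/listings')
      .send({ title: 'Vintage bicycle' });
    expect(res.status).toBe(201);
    expect(res.body.title).toBe('Vintage bicycle');
  });

  it('rejects a listing without a title', async () => {
    const res = await request(app).post('/listings').send({});
    expect(res.status).toBe(400);
  });
});
```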
The goal of performance testing is to identify performance bottlenecks and ensure the application can handle real-world usage efficiently, without degrading user experience.
We use different types of performance testing to assess various aspects of an application’s performance, such as speed, stability, scalability, and efficiency (an example load-test script follows the table):
| Type | Purpose |
| --- | --- |
| Load Testing | Can the system handle the expected volume of traffic or requests? |
| Stress Testing | Observes how the system behaves under extreme conditions. The goal is to find the breaking point, identify failure modes, and test recovery strategies. |
| Scalability Testing | Evaluates how well the system scales when more users or transactions are added. Determines the system’s capacity limits and how additional resources (e.g. hardware, infrastructure) affect performance. |
| Spike Testing | Measures the system’s reaction to a sudden, sharp increase in load, such as during a flash sale or viral event. |
| Volume Testing | Assesses the system’s ability to handle a large volume of data; typically used for database or data-driven applications with large data sets. |
| Latency Testing | Measures the delay in data transfer between the client and the server. Low latency is essential for real-time applications like online gaming or video conferencing. |
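For illustration, a basic load test could be scripted with an open-source tool such as k6 (one option among many; it is not named above). The target URL, load profile, and thresholds below are placeholders:

```js
// load-test.js — a sketch of a k6 load test; URL, load profile, and
// thresholds are placeholder values.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 50,        // 50 concurrent virtual users
  duration: '2m', // sustained for two minutes
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests under 500 ms
  },
};

export default function () {
  const res = http.get('https://staging.example.com/listings');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between requests
}
```

Raising `vus` over successive runs turns the same script into a rough stress or spike test.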
Our engineers use AI tools like Copilot for automated code generation, testing, and debugging. The effectiveness of these tools depends heavily on the quality of the prompts they are fed, which is why we base our prompts on our extremely detailed specification documents.
Each project phase has built-in steps that support our QA process.
Design phase:
Internal and client design demos are iterated on until validated.
Functional specifications:
Business goals and user flows are evaluated for feasibility and impact.
Technical requirements:
Elements such as the database schema, API endpoint list, tech stack & architecture, libraries, and integration flow are reviewed until validated.
Development phase:
Our definition of done for tickets includes three steps:
1. Engineers test every ticket locally and, where possible, on the UAT server.
2. Tickets are then assigned to the QA engineer.
3. If pushed code breaks something or impacts other engineers’ work, it is discussed immediately.
Internal demos: need to be user-centric, collaborative, well-prepared, and succinct. View an example of an internal demo.
Client demos:
Typically done at the end of an epic to demonstrate a full feature or a component thereof. As part of our commitment to continuous improvement, all demos are documented and reviewed. Participants each list what they thought was done well and what could be improved.
When we present a demo to the client, we want them to get exactly what the designer showcased. It’s the engineer’s job to make sure that each page matches the original design. For example, responsiveness needs to be tested on a mobile device emulator.
Testing by client:
We provide clients with a UAT environment, including automated feedback mechanisms powered by tools like BugHerd and Usersnap.
Our expectations are documented as essential coding guidelines and standards to ensure consistency, readability, and maintainability. Adhering to these rules promotes collaboration and simplifies code reviews, debugging, and future development. Guidelines include SOLID principles, naming conventions, repository patterns, and entity managers.
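As one illustration of these guidelines, the repository pattern keeps data access behind an interface so business logic never depends on a concrete database client. The sketch below is a generic TypeScript example written for this article, not code from our guidelines:

```ts
// user-repository.ts — a generic sketch of the repository pattern;
// names and the in-memory storage are illustrative only.
import { randomUUID } from 'node:crypto';

interface User {
  id: string;
  email: string;
}

// Business logic depends only on this abstraction (dependency inversion,
// the "D" in SOLID).
interface UserRepository {
  findById(id: string): Promise<User | null>;
  save(user: User): Promise<void>;
}

// One concrete implementation; swappable for SQL, an ORM, or an API client.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<string, User>();

  async findById(id: string): Promise<User | null> {
    return this.users.get(id) ?? null;
  }

  async save(user: User): Promise<void> {
    this.users.set(user.id, user);
  }
}

// Usage: services receive the abstraction, not the implementation,
// which also makes them trivial to unit-test.
async function registerUser(repo: UserRepository, email: string): Promise<User> {
  const user: User = { id: randomUUID(), email };
  await repo.save(user);
  return user;
}

registerUser(new InMemoryUserRepository(), 'jane@example.com').then(console.log);
```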
On a general level we adhere to the tenet: always leave code better than you found it. Whenever you touch any part of the code, go beyond the scope of your ticket and work as a team to keep the project clean.
The tech lead ensures that test coverage is good and unit tests are meaningful. They also ensure that tests are launched and passed before each PROD deployment. Lastly, they validate the QA engineer’s functional tests.
QA engineers are the lead advocates for software and system integrity across all our projects. Their main goal is to help our teams mitigate risk and deliver best-in-class solutions via manual and automated software testing protocols and tools.
QA engineers work closely with engineering teams throughout each sprint.
Our core value proposition for clients is that we offer a much higher success rate than other software agencies or turnkey SaaS platforms. Help us keep it that way by sharing our obsession with quality in every aspect of our work.
CobbleWeb helps early-stage entrepreneurs, tech startups and growing companies to conceptualise, design, build, improve, and launch successful online marketplaces.
Our custom user-focused approach to marketplace development increases our clients’ opportunities for success.
CobbleWeb has helped more than 30 startups and established companies design, build, test, and improve high-growth online marketplaces.