How to create a slideshow of your shop

collateral usage of CI Tools

RIGHT
Welcome 1st-time Titans and speaker. WHAT is this talk about: I will show you how you can achieve technological foresight on your complete project just by testing it - and how to optimize the testing workflow step by step

Andreas Mautz

  • first program in 1996
  • first magento shop in 2008
  • 3 years of QA in big projects before
  • CTO @ webvisum
  • FireGento Board Member
  • @mautz_et_tong
BOTTOM
First things first: I am not a native speaker, so if anything is unclear because of "words", interrupt
me. Otherwise, please ask all topic-related questions at the end.
Why am I supposed to speak about testing (Spoiler alert: yes, this is a talk about testing, I have seen the slides before)?
I have been doing Magento for a living for 10 years and own a small agency in Cologne.
I have learned to test stuff the hard way, multiple times.

Agenda:

  1. Start testing
  2. Optimize testing
  3. Analyse and react
RIGHT
I will show you three stages towards that goal of making a slideshow out of your shop.
First: If you do not test, start testing now, it is easy.
Second: If you are already testing, I will show you some tricks to optimize your testing cycles.
Third: I will show you how you can automatically analyse your test results and how you should react
to testing errors.
Bonus: I will show you how you can add intelligence and analysis to your test results, so you have some kind of foresight platform.

Why should I run tests?

  • Documentation! (Cover your ass)
  • Running Tests just makes you sleep better
  • Browser Cross Testing (Comparing Images)
  • Saving time and money
  • Add any other argument pro testing
Why do I do testing?
- intensify testing
- save money (the later a bug is found, the more expensive it is to fix)
- code better
- Sleep better
- Cover my ass (documentation, decision-making basis)

Start testing!

*it is really easy

BOTTOM

Testing Types

Functional Testing

  • Unit testing*
  • Integration testing
  • System testing
  • Sanity testing*
  • Smoke testing
  • Interface testing
  • Regression testing
  • Beta/Acceptance testing*
BOTTOM
What types of testing are functional?
I differentiate between functional testing and non-functional testing

Testing Types

Non-Functional Testing

  • Performance Testing*
  • Load testing
  • Stress testing*
  • Volume testing
  • Security testing*
  • Compatibility testing
  • Install testing*
  • Recovery testing
  • Reliability testing
  • Usability testing
  • Compliance testing
  • Localization testing*
BOTTOM
So, what types of testing are non-functional?
Not everything in this short list is sharply defined.
Talking about install testing: is a complete deployment plus acceptance testing on a brand-new instance
already install
testing?

Disclaimer

This Talk is not:

  • hands on
  • Setup your local Environment for testing
  • Static Testing / Analysing your code
  • Perfectly Written Analytics
BOTTOM
Talking about test-driven development:
Try to start with PHPUnit locally in small steps and push it forward as a principle in your project (a minimal sketch follows below)
Try to analyse your code on a commit / feature basis locally
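
As a minimal sketch of such a first small step - the Calculator class and its test are made up purely for illustration - a first PHPUnit test could look like this:

<?php
// Minimal sketch: a tiny example class and a first PHPUnit test for it (both made up).
use PHPUnit\Framework\TestCase;

class Calculator
{
    public function add(int $a, int $b): int
    {
        return $a + $b;
    }
}

class CalculatorTest extends TestCase
{
    public function testAddSumsTwoIntegers()
    {
        $this->assertSame(5, (new Calculator())->add(2, 3));
    }
}

Run it locally with vendor/bin/phpunit and grow the suite from there.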

Tools to start analysing and testing code

  • PHPUnit
  • PHPStan - PHP Static Analysis Tool, Psalm, Phan
  • dePHPend (PHP Depend)
  • PHPMD - PHP Mess Detector*
  • PHPCPD - PHP Copy/Paste Detector
  • PHP Coding Standards Fixer (PHP CodeSniffer)
BOTTOM
Just to name them:
PHP CodeSniffer is in brackets because of the CS Fixer.
PHPMD has an asterisk because of PHPStan/Psalm/Phan (together).
PHP Depend is in brackets because Sebastian Bergmann says dePHPend is the new shit; we are still on PHP Depend.

Where is my slideshow?

BOTTOM
So, back to the topic. I mean, a step further towards it.

Codeception

Acceptance Testing

BOTTOM
For me, automated acceptance testing is the final step in the approval process for code changes.
Codeception is a testing framework written in PHP. You can run PHPUnit tests, API tests and much more.
Setup is really easy.
Here we focus on browser testing as part of acceptance testing. And yes, Codeception takes
images, which will lead us to our gallery.

Codeception: Sample Test

                    
$I = new AcceptanceTester($scenario);
$I->wantTo('ensure that the swag is accessible');
$I->amOnPage('/');
$I->click("Colored Shapes Unisex Black Shirt");
$I->selectOption('Size', 'XL');
$I->click('#product-addtocart-button');
$I->see("You added Colored Shapes Unisex Black Shirt to your shopping cart.");
$I->amOnPage('/magento-10-year-anniversary-t-shirt-config.html');
$I->selectOption('Size', 'L');
$I->click('#product-addtocart-button');
$I->see("You added Magento 10 Year Anniversary T Shirt to your shopping cart.");
                    
                
Let us jump into a small sample of an e-commerce acceptance test. We want to put two shirts from
the Magento swag store into our cart.
I will spare you the output of the run command at this point. But here are two types of results.

Overview

Summary and steps as a human-readable, clickable website
file:///Users/amautz/www/mmde18-sample/tests/_output/report.html

Slideshow

Each test step is a screenshot.
And yes, this is configurable:
just disable the normal recorder feature and use $I->makeScreenshot("Good Name for a Screenshot"); instead.
file:///Users/amautz/www/mmde18-sample/tests/_output/record_5be453094c84e_TestFailCept__Ensure_that_the_swag_is_accessible/index.html
So, believe me, it is really easy to create screenshots as a test result.
It shouldn't take longer than one hour to get your first acceptance test running with Codeception.
The techniques here are Codeception and a Selenium hub running some browsers - that is all.
You can do it locally, on your own infrastructure somewhere, or use a paid service for this like
Browserstack.
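
As a sketch of that screenshot approach - assuming the WebDriver module and the same swag-store pages as in the sample test above; the screenshot names are made up:

// Sketch: manual screenshots instead of the recorder extension.
// Assumes the WebDriver module is enabled; screenshot names are made up.
$I = new AcceptanceTester($scenario);
$I->wantTo('collect screenshots for the slideshow');
$I->amOnPage('/');
$I->makeScreenshot('01-homepage');
$I->click('Colored Shapes Unisex Black Shirt');
$I->makeScreenshot('02-product-page');
$I->selectOption('Size', 'XL');
$I->click('#product-addtocart-button');
$I->makeScreenshot('03-added-to-cart');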

Tricks on Testing!

Or: Just some lessons learned from testing heavily

BOTTOM

But first: What do we have so far?

  • tons of testing tools
  • a huge amount of ToDos to build them all in
  • a fucked-up, week-long running pipeline if you run everything
We have a ton of testing tools.
If they were all executed one after the other in their entirety, every deployment would become
hell in the waiting room.
So here are some useful tips so you do not get paid as a coder just for waiting.
BOTTOM

Lesson 1: Use your sleeping time

Test on Nightly / Weekly builds

BOTTOM
At night, you have time, so use it. As long as you don't work around the clock, nights are the
perfect time for running long test suites.
If you constantly test at an ever-increasing level, it is fair enough to test the complete setup only from
time to time - before you plan to release a relaunch, big features and so on.
If your pipeline fits into your sleeping time, run it.

Lesson 2: Hail to the Pareto principle

Don't be stupid, be efficient

BOTTOM
If 80% of your client's customers use Chrome, stick to this single browser for acceptance testing
in the daily business and do the other browsers on weekends.
Tap the analytics API for devices, browser versions and display resolutions.
Test the framework (Magento 2), if you have to, on a regular basis outside of your project, and just test
your customized or 3rd-party code.
Divide and conquer.
Testing the most important 20% of your shop should make you 80% sure that everything is OK. That should be fair enough for daily business.

Lesson 3: Parallelize all the things

If you can't multitask: there is a cloud that can do this

BOTTOM
For acceptance testing: use services like Browserstack if you have the money, or set up a Selenium
cluster.
Run different test frameworks at the same time in containers, locally or remotely.
For example: we have a 25 EUR / month bare-metal server that can run nearly 10,000 test minutes
per night, 16 runs at once.
On Browserstack this would cost you around 1000 euros / month.
Of course, Browserstack has some nice features, but our 25 euro treadmill fits the Pareto
principle.

Lesson 4: Focus on the ongoing project

Make your project better in quality without hazing your developers

BOTTOM
Start small and add diversity slowly.
Don't block your developers from getting issues out of their way just by adding useless tests.
Success thresholds for new testing barriers in your pipeline should come from the bottom up or be
agreed upon with the developers.
Talk to each other and document the progress and the new rules accordingly and regularly (like you
do with issues).

Lesson 5: Gaining speed in the process is fun

Time is money and gaining speed in the process is fun

BOTTOM
Some test frameworks have a random option. Use it to randomly run tests in the daily business.
Develop a level system for your test cases (see the sketch below).
Check complicated or hidden stuff more often than the showy parts.
If you need to perform manual tests, e.g. usability tests, do so as rarely as possible.
Sensitize manual testers to small details and keep them informed about the changes made.
Already developed features which are working but vital for the system can be tested once before the
release.
Believe me again: you save a lot of time if you don't test every single payment and shipping method
on every commit.
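
One way to build such a level system - a sketch only, the group names and test methods are made up - is Codeception's @group annotation, so the daily pipeline only runs e.g. codecept run acceptance -g critical, while the nightly or weekly build runs everything:

<?php
// Sketch of a test "level system" via Codeception groups; names and tests are made up.
class CheckoutCest
{
    /**
     * @group critical
     */
    public function guestCanAddProductToCart(AcceptanceTester $I)
    {
        // runs in every daily pipeline
    }

    /**
     * @group weekly
     */
    public function everyPaymentMethodIsSelectable(AcceptanceTester $I)
    {
        // only part of the nightly/weekly full run
    }
}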

Lesson 6: Compress your results

There is no fun in manually evaluating every test result from different test types to get a health status.

And it helps if you can put the current status in relation to the past

BOTTOM
Use your CI toolbox to break the build automatically on the obvious things.
This can be done for bad results from static tests,
or with git hooks on commit message quality or stopwords in code (see the sketch after these notes).
But should you stop a deployment when your release version just has a sitespeed test result of 80?
Is your 80 better or worse than live?
Do you know?
You need an instance where information is gathered and where knowledge is made comparable across versions.
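
For the git-hook part, here is a minimal sketch of a commit-msg hook written in PHP; the stopwords and the minimum-length rule are made-up example rules:

#!/usr/bin/env php
<?php
// Sketch of a commit-msg hook; stopwords and length rule are made-up examples.
// Git passes the path to the commit message file as the first argument.
$message = file_get_contents($argv[1]);

foreach (['wip', 'fixup', 'temp'] as $stopword) {
    if (stripos($message, $stopword) !== false) {
        fwrite(STDERR, "Commit message contains stopword '$stopword'.\n");
        exit(1); // a non-zero exit code aborts the commit
    }
}

if (strlen(trim($message)) < 10) {
    fwrite(STDERR, "Commit message is too short.\n");
    exit(1);
}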

That is enough lessons learned for a 20-minute talk

But these were only the two necessary steps to get to the actual topic

RIGHT

Last step: Slideshow, Baby!

aka: use your deployment pipeline to gather insights and not just build and copy files

BOTTOM
So, you have an automated or half-automated system which is doing your deployment for you.
Most of the time it just creates files and copies them to another location.
You get bonus points if there are already tests running in your pipeline.
So, what else can you do with an automated system that runs jobs for you?

Project Overview:

Real Project Overview

BI / KPIs

BOTTOM
Orders / hours / release
Compared to the last version / live
Build deltas for automation (∆ = 0 no progress, > 0 worse, < 0 progress) - see the sketch below
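
A minimal sketch of such a delta gate - the kpi.json files and the sitespeed_score key are hypothetical, and a higher score means better here:

<?php
// Sketch: fail the pipeline when a KPI got worse compared to the last release.
// File names and the sitespeed_score key are hypothetical.
$current  = json_decode(file_get_contents('build/kpi.json'), true);
$previous = json_decode(file_get_contents('build/kpi-previous.json'), true);

// Convention from the slide: delta = 0 no progress, > 0 worse, < 0 progress.
$delta = $previous['sitespeed_score'] - $current['sitespeed_score'];

if ($delta > 0) {
    fwrite(STDERR, "Sitespeed score dropped by $delta points compared to the last release.\n");
    exit(1);
}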

Performance

BOTTOM
Siege
Apache Benchmark: ab -c 20 -n 200 https://www.magetitans.com/
docker run --shm-size=1g --rm -v "$(pwd)":/sitespeed.io sitespeedio/sitespeed.io https://www.magetitans.com/
Google API: GET https://www.googleapis.com/pagespeedonline/v4/runPagespeed?url=https%3A%2F%2Fwww.magetitans.com%2F&locale=en&screenshot=true&snapshots=false&strategy=desktop&key={YOUR_API_KEY}
Mobile vs. desktop
BOTTOM

Code Quality

BOTTOM
Combined Output of your static analysis tools

Logfiles

BOTTOM
Fever patient: be sensitized to the application and its general well-being. Log files (size, exceptions per hour). If your log files are big, that is not good.
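
A small sketch of that kind of log check - the log path and the leading timestamp format are assumptions; adjust them to your project:

<?php
// Sketch: count exceptions per hour; path and timestamp format are assumptions.
$perHour = [];
foreach (file('var/log/exception.log') as $line) {
    if (preg_match('/^\[(\d{4}-\d{2}-\d{2} \d{2}):/', $line, $match)) {
        $hour = $match[1];
        $perHour[$hour] = ($perHour[$hour] ?? 0) + 1;
    }
}
foreach ($perHour as $hour => $count) {
    echo "$hour:00 -> $count exceptions\n";
}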

Database Status

BOTTOM
InnoDB buffer pool size in relation to the available RAM
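
A sketch of such a check - the DSN and credentials are placeholders, and reading /proc/meminfo is Linux-specific:

<?php
// Sketch: compare innodb_buffer_pool_size with the server RAM.
// DSN/credentials are placeholders; /proc/meminfo is Linux-specific.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=magento', 'user', 'secret');
$row = $pdo->query("SHOW VARIABLES LIKE 'innodb_buffer_pool_size'")->fetch(PDO::FETCH_ASSOC);
$bufferPoolBytes = (int) $row['Value'];

preg_match('/MemTotal:\s+(\d+) kB/', file_get_contents('/proc/meminfo'), $match);
$ramBytes = (int) $match[1] * 1024;

printf(
    "InnoDB buffer pool uses %.1f%% of %.1f GB RAM\n",
    $bufferPoolBytes / $ramBytes * 100,
    $ramBytes / (1024 ** 3)
);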

Slideshows from the Test Results

BOTTOM
FINALLY: something you can click through.
Useful for customer journeys.
Mobile / desktop comparison in UX.
Actually, we just hand this to our clients once in a while and let them review their page design on this "real shit".

Conclusions:

  • Intensify your testing game
  • Save time and money
  • Raise code quality
  • Take care of your coders
  • Make a dashboard
  • Use it for Feedback

Bonus slide: Feedback

BOTTOM
Talk to your clients, but test it yourself first
Deliver information without forcing it on anyone
Gather useful insights
Throw away insights you don't need - time = money || ( sleep || quality_time || food )
Cover your ass
React to test errors (snooze is not an option) -> like the fever patient who often has a fever but
ends up at the emergency doctor because of his teeth
Automate where possible, but keep an eye on the details
FINISH

Questions?

Thank you!

END