Automated testing for I2C sensor driver module #17152
DavesCodeMusings started this conversation in Show and tell
Replies: 2 comments
-
Automated testing is the way. Your approach seems simple and pragmatic. Thanks for documenting!
-
Testing is always good. We test our training devices (PLCs) manually as far as possible, but for a large order (96 devices) I set up automated tests. This gave me two advantages: with 32 digital inputs/outputs and 4 analog inputs/outputs per device, it's easy to get something wrong when testing manually. And in the software world, testing has another advantage: sometimes errors slip through the tests, and then the tests simply have to be improved.
-
TL;DR... https://github.com/DavesCodeMusings/SHT3x-MicroPython
It's a Sensirion SHT3x I2C driver. I know there are modules out there already, and I mostly did this for the learning experience. The part I'm keen to show off is the automated testing used with this module. I don't know how many folks are using automated testing with their code, but I thought I'd share for those interested.
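For context, on-device use of a driver like this typically looks something like the sketch below. This is a hypothetical example: the module name, class name, constructor arguments, and `measure()` method are assumptions for illustration, not necessarily the actual API of the linked repository.

```python
# Hypothetical on-device usage of an SHT3x driver (names are illustrative).
from machine import SoftI2C, Pin
from sht3x import SHT3x  # assumed module and class name

# The SHT3x defaults to I2C address 0x44 (0x45 if the ADDR pin is pulled high).
i2c = SoftI2C(scl=Pin(22), sda=Pin(21))
sensor = SHT3x(i2c, address=0x44)

# Assumed method returning (degrees C, percent relative humidity).
temperature, humidity = sensor.measure()
print("Temperature: {:.1f} C, Humidity: {:.1f} %".format(temperature, humidity))
```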
My testing up to this point had been wiring an actual SHT30 to an ESP32 board and running `main.py` to print temperature and humidity values. If it looked reasonable, I called it good. Now I have `run_tests.py`, which executes to ensure all the class functions are working correctly.

The test script is designed to run in CPython so I can automate the tests on a GitHub runner whenever I push code to the repository. And the GitHub runner has no concept of SHT3x sensors, `machine.SoftI2C()`, `machine.Pin()`, or even `micropython.const()`. So I made some mock functions to act like an SHT30 responding to I2C commands. Most of this is in my machine.py script.
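The mocks work roughly like the following minimal sketch. The method set and the canned reply bytes here are my assumptions for illustration, not a copy of the repository's machine.py:

```python
# machine.py -- a stand-in for MicroPython's machine module so the driver
# imports cleanly under CPython. The classes cover the subset of the API an
# I2C driver typically touches; the canned reply bytes are illustrative.

class Pin:
    """Accepts machine.Pin-style arguments and ignores them."""
    def __init__(self, *args, **kwargs):
        pass

class SoftI2C:
    """Pretends to be an I2C bus with a single SHT30 attached."""
    def __init__(self, *args, **kwargs):
        self._last_command = None

    def writeto(self, addr, buf):
        # Remember the command; a fuller mock could vary replies per command.
        self._last_command = bytes(buf)

    def readfrom(self, addr, nbytes):
        # Fixed 6-byte measurement frame: temperature MSB, LSB, CRC, then
        # humidity MSB, LSB, CRC. Raw 0x6666 decodes to 25.0 C and raw
        # 0x8000 to 50.0 %RH, and both CRC bytes are valid SHT3x CRC-8.
        return bytes([0x66, 0x66, 0x93, 0x80, 0x00, 0xA2])[:nbytes]

# A companion micropython.py can be a one-liner so micropython.const()
# also resolves under CPython:
#     def const(x):
#         return x
```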
My `run_tests.py` is not fancy at all. There's no testing framework; it simply uses `assert` statements to throw an exception when a test doesn't pass (something like the sketch at the end of this post). This is enough to stop the pipeline job running on the GitHub runner and flag the build as unsuccessful. Other tests rely on the module itself throwing the exception to fail the test.

I'm not an automated-testing expert by any stretch of the imagination, so I'm open to suggestions for improvement. As it stands, though, I think it's far better than my old method of running it on an ESP32 and calling it good. And it runs automatically with each code push, so no manual effort from me.
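For anyone curious what framework-free tests of this sort can look like, here is a minimal sketch. The driver API and the expected values are assumptions that pair with the hypothetical mock above, not the contents of the actual run_tests.py:

```python
# run_tests.py -- framework-free tests. Any failed assert raises
# AssertionError, which makes the CI job exit non-zero and fail the build.
# With the mock machine.py on the path, the driver imports under CPython.
from machine import SoftI2C, Pin
from sht3x import SHT3x  # assumed module and class name

i2c = SoftI2C(scl=Pin(22), sda=Pin(21))
sensor = SHT3x(i2c)

# The mock's canned reply decodes to 25.0 C and 50.0 %RH.
temperature, humidity = sensor.measure()  # assumed method name
assert abs(temperature - 25.0) < 0.1, "unexpected temperature: %s" % temperature
assert abs(humidity - 50.0) < 0.1, "unexpected humidity: %s" % humidity

print("All tests passed.")
```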