Hello everyone,
today I want to present the test system for Cantor's worksheet.
The worksheet is the central and most prominent part of the application, where most of the work is done.
So it is important to cover this part with enough tests to ensure the quality and stability of this component in the future.
At the moment, this system contains only ten tests, and all of them cover the recently added functionality for importing Jupyter notebooks (I mentioned it in my first post).
However, this test infrastructure is generic in nature and can easily be used to test Cantor's own worksheet files, too.
The test system checks that a worksheet/notebook file is loaded successfully, checks the backend type, and validates the overall worksheet structure and the content of its entries.
Some content, such as image content, is not validated: doing so would increase the complexity of the tests and slow down their execution without adding much value for quality assurance.
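To give an idea of what such a test looks like, here is a minimal sketch of a QtTest-based check, assuming the tests are written with Qt's test framework as is common in KDE projects. The helper loadWorksheet() and the entry accessors (entryCount(), entryAt()) are hypothetical names used only to illustrate the kind of checks described above, not Cantor's actual API.

```cpp
// Sketch of a worksheet import test using QtTest.
// The Worksheet/entry accessors and loadWorksheet() are illustrative only;
// the real Cantor headers and helpers would be included in an actual test.
#include <QtTest>

class WorksheetImportTest : public QObject
{
    Q_OBJECT

private Q_SLOTS:
    void testJupyterNotebookImport();
};

void WorksheetImportTest::testJupyterNotebookImport()
{
    // 1. Check that the notebook file is loaded successfully.
    auto* worksheet = loadWorksheet(QLatin1String("example.ipynb")); // hypothetical helper
    QVERIFY(worksheet != nullptr);

    // 2. Check the backend type stored in the notebook's metadata.
    QCOMPARE(worksheet->backendName(), QLatin1String("Python"));

    // 3. Validate the overall structure: number and types of entries.
    QCOMPARE(worksheet->entryCount(), 3);
    QVERIFY(worksheet->entryAt(0)->isMarkdownEntry());
    QVERIFY(worksheet->entryAt(1)->isCommandEntry());

    // 4. Validate the textual content of the entries
    //    (image content is deliberately not compared).
    QCOMPARE(worksheet->entryAt(1)->command(), QLatin1String("2 + 2"));
}

QTEST_MAIN(WorksheetImportTest)
#include "worksheetimporttest.moc"
```

Each new notebook fixture then only needs its own test slot with the expected backend, entry layout, and entry contents, which keeps adding further tests cheap.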
This new infrastructure has already proven to be helpful. While writing the first tests for the worksheet, I found a couple of bugs in the implementation of the Jupyter notebook import. After fixing them, and now with these additional safeguards in place, I am more confident about the implementation and can say with more certainty that the import of Jupyter notebooks works fine.
In a previous post I mentioned some issues with the performance of the renderer used for mathematical expressions in Cantor. It turned out this problem is not as easy to solve as I first assumed. But now, having finished a substantial part of the work planned for this GSoC project, I can give more attention to the remaining problems, including the performance of the renderer.
In the next post I plan to show a better implementation of the math renderer in Cantor.