add english translation

This commit is contained in:
yaozhicheng 2024-06-21 19:48:25 +08:00
parent c1c739256f
commit 5335f43ed1
3 changed files with 388 additions and 5 deletions


@ -4,7 +4,7 @@
This project aims to explore open-sourced subdivision verification of the high-performance open-source RISC-V OpenXiangshan processor's microarchitecture. It introduces new tools and methods based on Python, enabling all students interested in chip design and verification to quickly grasp and study the XiangShan microarchitecture. This phase provides a detailed introduction to the principles and implementation of the branch prediction module of the XiangShan Kunminghu architecture, along with the corresponding open-source verification environment. Participants in this phase can earn points and rewards by submitting bugs, writing verification reports, and more.
**The Open Verification Project website: [open-verify.cc](https://open-verify.cc)**
**The Open Verification Project website: [open-verify.cc](https://open-verify.cc/en/)**
## Introduction
@ -21,13 +21,13 @@ Chip validation is a crucial aspect of chip design work. Skipping or insufficien
## Learning resources
1. [Basic Learning Materials](https://open-verify.cc/mlvp/docs/): Learn about chip validation and how to use Python for validation.
1. [Basic Learning Materials](https://open-verify.cc/mlvp/en/docs/): Learn about chip validation and how to use Python for validation.
1. [Introduction to the XiangShan BPU](https://open-verify.cc/xs-bpu/docs/): Learn about branch prediction and the basic predictors used in the XiangShan processor.
1. [Introduction to the XiangShan BPU](https://open-verify.cc/xs-bpu/en/docs/): Learn about branch prediction and the basic predictors used in the XiangShan processor.
1. [How to Participate in This Activity](/doc/join_cn.md): Learn how to participate in this activity and the rules.
1. [How to Participate in This Activity](/doc/join_en.md): Learn how to participate in this activity and the rules.
1. [Building Verification Environment](/doc/env_cn.md): Learn how to set up the basic verification environment, how to validate, and submit validation results.
1. [Building Verification Environment](/doc/env_en.md): Learn how to set up the basic verification environment, how to validate, and submit validation results.
**<font color="blue">To accelerate the verification process, the verification environment has provided the following reusable features:</font>**
<font color="blue">

doc/env_en.md Normal file

@ -0,0 +1,231 @@
# BPU Verification Environment
This environment provides all dependencies and toolkits required for BPU verification. This verification environment needs to run under the Linux system and includes the following components:
1. Generate the Python DUT module to be verified
2. Example project for DUT verification
3. Component to generate verification report
Project to be verified:
- Module to be verified: [XiangShan 545d7b](https://github.com/OpenXiangShan/XiangShan/tree/545d7be08861a078dc54ccc114bf1792e894ab54)
## Install Dependencies
In addition to the basic gcc/python3 development environment, this repository also depends on the following two projects. Please install them first, **and install the dependencies of the corresponding projects**.
1. [picker](https://github.com/XS-MLVP/picker)
2. [mlvp](https://github.com/XS-MLVP/mlvp)
Then install other dependencies through the following command:
```bash
apt install lcov # genhtml
pip install pytest-sugar pytest-rerunfailures pytest-xdist pytest-assume pytest-html # pytest
```
## Generate Module to be Verified
Download the repository
```bash
git clone https://github.com/XS-MLVP/env-xs-ov-00-bpu.git
cd env-xs-ov-00-bpu
```
### Generate uFTB
```bash
make uftb TL=python
```
The above command generates an `out` directory in the current directory; `UT_FauFTB` under `picker_out_uFTB` is the Python module to be verified, and it can be imported directly in a Python environment. Because the generated Python DUT is tied to the Python version, a universal prebuilt python-dut cannot be provided; you need to compile it yourself.
```bash
out
`-- picker_out_uFTB
`-- UT_FauFTB
|-- _UT_FauFTB.so
|-- __init__.py
|-- libDPIFauFTB.a
|-- libUTFauFTB.so
|-- libUT_FauFTB.py
|-- uFTB.fst.hier
`-- xspcomm
|-- __init__.py
|-- __pycache__
| |-- __init__.cpython-38.pyc
| `-- pyxspcomm.cpython-38.pyc
|-- _pyxspcomm.so -> _pyxspcomm.so.0.0.1
|-- _pyxspcomm.so.0.0.1
|-- info.py
`-- pyxspcomm.py
4 directories, 13 files
```
After importing the UT_FauFTB module, you can perform simple tests in the Python environment.
```python
from UT_FauFTB import *

if __name__ == "__main__":
    # Create the DUT
    uftb = DUTFauFTB()
    # Initialize the DUT with the clock pin name
    uftb.init_clock("clock")
    # Your test cases here
    # ...
    # Destroy the DUT
    uftb.finalize()
```
Other modules to be verified, such as TAGE-SC and FTB, can also be generated with similar commands.
**Supported module names are: uftb, tage_sc, ftb, ras, ittage. You can also generate all DUT modules at once with the following command.**
```bash
make all TL=python
```
## BPU Peripheral Environment
The BPU is a module inside the CPU. This environment provides the surrounding infrastructure required to drive the BPU with trace data (**when verifying, you can choose whether to use it according to your actual situation**).
### Branch Trace Tool: BRTParser
BRTParser is a tool we designed specifically for BPU verification that automatically captures and parses branch information from a program's instruction stream. It is based on the XiangShan frontend development tool `OracleBP`. BRTParser integrates the NEMU simulator internally, so it can run programs directly and capture the branch information in them. BRTParser parses the captured branch information into a universal format, which is convenient for subsequent verification work.
Please refer to `BRTParser` in the `utils` directory for details.
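As a rough illustration of what "parsing into a universal format" means, a branch record can be turned into a `(pc, target, taken)` tuple. The field layout below is hypothetical, for illustration only; see `utils/BRTParser` for the actual format.

```python
# Hypothetical record layout for illustration only (NOT BRTParser's real
# format). Each line: "<pc-hex> <target-hex> <T|N>".
def parse_branch_record(line):
    pc, target, taken = line.split()
    return int(pc, 16), int(target, 16), taken == "T"

record = parse_branch_record("0x80001000 0x80001040 T")
```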
### FTQ Running Environment
Since a single sub-predictor module cannot run real programs, it is even more impossible to verify its prediction accuracy and functional correctness in actual programs. Therefore, we provide a simple FTQ environment. This environment uses the branch information generated by BRTParser to generate the program instruction execution stream. FTQ will parse the predictor's prediction results and compare them with the actual branch information to verify the accuracy of the predictor. In addition, FTQ will also issue redirection information and update information to the BPU, so that the predictor can run continuously in the FTQ environment.
In order for a sub-predictor to work normally, we also simulated the BPU top-level module to provide timing control and other functions for the sub-predictor. For non-FTB type sub-predictors, we also provide a simple FTB implementation, which is used to add FTB basic prediction result information to the sub-predictor result.
Currently, we use the FTQ environment to drive the uFTB sub-predictor and have written a timing-accurate uFTB reference model. The specific implementation and usage of the FTQ environment can be obtained in this test case, see `test_src/uFTB-with-ftq` for details.
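The core of the comparison described above can be sketched in pure Python (toy names, not the real FTQ classes): replay the trace, query the predictor for each branch, and count agreements; on a mispredict the real FTQ would additionally redirect and update the BPU.

```python
# Toy sketch of an FTQ-style scoring loop (illustrative names only).
# `predict` maps a branch PC to a taken/not-taken guess.
def score_predictor(trace, predict):
    hits = 0
    for pc, actual_taken in trace:
        if predict(pc) == actual_taken:
            hits += 1
        # the real FTQ would also send redirect/update info back to the BPU here
    return hits / len(trace)

trace = [(0x1000, True), (0x1004, False), (0x1000, True)]
accuracy = score_predictor(trace, lambda pc: True)  # always-taken baseline
```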
## Write Test Cases
Participants in the verification need to write test cases to verify the functional correctness of the BPU sub-module. In this repository, all test cases need to be placed in the `tests` directory.
We provide a test case running framework based on pytest, which makes it easy to write test cases, define function coverage, generate test reports, and so on. Therefore, when writing test cases, you need to follow the specifications introduced in this section.
### Running Tests
We have provided two basic test cases for uFTB; each test case is placed in a separate subdirectory under the `tests` directory, and the subdirectory name is the name of the test case. Before running these two test cases, please ensure that the uFTB module has been compiled correctly and that the dependencies required by the test cases have been installed.
Afterwards, you can run the corresponding test case. For example, to run the `uFTB_raw` test case, just run the following command in the `tests` directory:
```bash
make TEST=uFTB_raw run
```
This command will automatically run the `uFTB_raw` test case and generate waveform, coverage, and test report information. The test report will be saved in the `tests/report` directory. You can open `tests/report/report.html` in your browser to view the content of this test report. The test report style is shown in the following figure, and other files will also be generated in the `tests` directory.
<div style="text-align: center;">
<img src="/.github/image/test-report.png" width="700">
</div>
If you need to run all test cases at once, you can run the following command:
```bash
make run
```
The generated test report will include the test results of all test cases.
### Adding Test Cases
When writing your own test cases, you only need to create a new subdirectory under the `tests` directory for the new test case. The name of the subdirectory should be the name of the test case. You can add any code files in this directory; just make sure that the entry file of the test case is named `test_<test name>.py`. In this file, the entry function of the test case also needs to be named `test_<test name>`. You can write one or more entry files and entry functions.
In each entry function, you need to follow the format below:
```python
import mlvp.funcov as fc
from mlvp.reporter import set_func_coverage, set_line_coverage

def test_mydut(request):
    # Create the DUT, specifying the waveform and coverage file names for this test.
    # Note: each test function should use different waveform/coverage file names,
    # otherwise the files will be overwritten
    my_dut = DUTMydut(waveform_filename="my_test.fst", coverage_filename="my_test_coverage.dat")
    # Specify function coverage rules
    g1 = fc.CovGroup("group1")
    # ...
    g2 = fc.CovGroup("group2")
    # ...
    # Test running code
    # ...
    # End the test and record coverage information; the coverage file name
    # must match the one specified above
    my_dut.finalize()
    set_func_coverage(request, [g1, g2])
    set_line_coverage(request, "my_test_coverage.dat")
```
After the test case is written, you can directly run in the `tests` directory:
```bash
make TEST=<test case name> run
```
This will automatically complete the running of the test case, waveform generation, coverage statistics, and test report generation.
When the local test passes, you can submit the test case. When submitting, the test results in the test report need to meet the following requirements:
1. All test cases pass
2. Code line coverage is greater than 95%
3. Function coverage reaches 100%
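Conceptually, 100% function coverage means every declared watch point in every coverage group was hit at least once during the tests. A toy stand-in for the idea (pure Python; this is not mlvp's actual `funcov` API):

```python
# Toy illustration of function coverage (NOT mlvp's real funcov API):
# coverage reaches 100% once every named point has been sampled at least once.
class ToyCovGroup:
    def __init__(self, name, points):
        self.name = name
        self.hits = {p: 0 for p in points}
    def sample(self, point):
        if point in self.hits:
            self.hits[point] += 1
    def coverage(self):
        covered = sum(1 for n in self.hits.values() if n > 0)
        return covered / len(self.hits)

g = ToyCovGroup("group1", ["hit", "miss", "alias"])
g.sample("hit")
g.sample("miss")
```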
### Log Output
In the mlvp library, a dedicated logger is provided. We recommend using this logger to record information during the test.
Specifically, you can record logs in the following way:
```python
import mlvp
mlvp.debug("This is a debug message", extra={"log_id": "dut"})
mlvp.info("This is an info message")
mlvp.warning("This is a warning message", extra={"log_id": "bundle"})
mlvp.error("This is an error message")
mlvp.critical("This is a critical message")
```
If you need to change the log format, log level, or file output, you can configure them by calling the `setup_logging` function in the `mlvp` library:
```python
def setup_logging(
    log_level=logging.INFO,
    format=default_format,
    console_display=True,
    log_file=None)
```
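As a rough stdlib analogue of that signature (this uses plain `logging`, not mlvp's implementation, which may differ), the parameters map to a level, a format string, an optional console handler, and an optional file handler:

```python
import logging

# Plain-logging sketch of what a setup function with this signature does;
# mlvp's actual implementation may differ.
def setup_logging_sketch(log_level=logging.INFO,
                         fmt="%(asctime)s %(levelname)s %(message)s",
                         console_display=True, log_file=None):
    handlers = []
    if console_display:
        handlers.append(logging.StreamHandler())   # log to the console
    if log_file:
        handlers.append(logging.FileHandler(log_file))  # also log to a file
    logging.basicConfig(level=log_level, format=fmt,
                        handlers=handlers, force=True)

setup_logging_sketch(log_level=logging.DEBUG)
```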
## Suggested Verification Process (Must Read)
**1. Read the documents and sort out the test points.** While reading the BPU documents, collect and refine the function points.
**2. Read the code, then encapsulate and drive the DUT.** The code contains all implementation details; based on it, wrap the DUT's basic operations into individual helper functions, then test whether these behaviors are normal.
**3. Write test cases based on the test points.** Using the test points and the DUT helper functions, complete the testing of most functions. (**Don't write the reference model right away.**)
**4. Write the reference model.** Once all basic function points have been tested, write the reference model based on your understanding. (If all function points are tested and the function and line coverage already meet the requirements, the reference model can be skipped.)
**5. Random full-system testing.** Drive the DUT and the reference model with the same random stimuli and compare the results. Perform coverage analysis and construct targeted inputs to improve coverage.
**6. Write the test report.** Complete the report according to the format requirements in the basic documents.
*Note: at any point in the above process, if you find a bug you can submit it immediately through a PR.*
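Step 5 above can be sketched as follows (the function names are illustrative; your DUT wrapper and reference model will have their own interfaces):

```python
import random

# Minimal random co-simulation sketch: feed identical stimuli to the DUT
# and the reference model each step and compare their outputs.
def co_simulate(dut_step, ref_step, n=1000, seed=0):
    rng = random.Random(seed)          # fixed seed for reproducibility
    for _ in range(n):
        stim = rng.getrandbits(32)     # random 32-bit stimulus
        dut_out, ref_out = dut_step(stim), ref_step(stim)
        assert dut_out == ref_out, f"mismatch on input {stim:#010x}"

# self-check with two identical toy models
co_simulate(lambda x: x & 0xFF, lambda x: x & 0xFF, n=100)
```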

doc/join_en.md Normal file

@ -0,0 +1,152 @@
## How to Participate in this Event
Participating in this verification event can earn you a chance to win generous prizes. [Registration Form](https://iz9a87wn37.feishu.cn/share/base/form/shrcnwpiyWaVUzyo47QdPBGy5Yd)
### Process Introduction
The process of participation is as shown below:
<img src="/.github/image/ov-pipline.svg" width="800px">
#### (0) Online Registration
There is no limit to the number of team members for this event. It can be one or more. The final prize will be distributed to the team leader according to the team's points. During the event, the team name and the name of the team leader will be made public in the group.
#### (1) Qualification Verification
In order to assess the capabilities of the participating teams, this event provides several DUTs with known bugs for the teams to verify. Teams that find more than 80% of the bugs and can perform root-cause analysis qualify to participate. The number of bugs found also counts toward the team's points.
#### (2) Fork Repository
Fork this repository, then set up the environment locally and participate in the testing tasks.
#### (3) Task Assignment
Tasks are described in this repository's Issues. Team leaders sign up by replying under the corresponding Issue. The administrator will compile the statistics and then announce them in the group.
#### (4) Decompose Test Points & Write Verification Plan
Decomposing test points and verification plans are important steps in the chip verification process, which directly affect the verification results. At this stage, you need to submit the corresponding report through PR, and the organizer will review and score the report.
#### (5) Write Test Cases
Write test cases; for the specific format, refer to the templates in the `tests` directory. Test cases need to cover the corresponding test points.
#### (6) Write Test Code & Discover Bugs in Testing
During the testing process, after discovering a bug, you need to analyze the bug.
#### (7) Report Bugs through Issue and PR to Earn Points
You can submit bugs at any time through PR, but the organizer will only review bugs for a team once a day. The organizer will issue corresponding points based on the type and level of the bug.
#### (8) Write Test Report
Refer to the template [NutShell Cache](https://open-verify.cc/mlvp/docs/basic/report/) for writing.
#### (9) PPT Online Defense
Prepare a PPT for the online defense of the entire verification task. The defense is organized centrally according to each team's completion status.
## Basic Tasks
### Task 1. uFTB & FTB
Source code address: [FauFTB.sv](https://github.com/XS-MLVP/env-xs-ov-00-bpu/tree/main/rtl/uFTB)
Function description document: [uFTB Branch Predictor](https://open-verify.cc/xs-bpu/en/docs/modules/01_uftb/)
Reference function point document: [uFTB Function List](https://open-verify.cc/xs-bpu/en/docs/feature/01_uftbfeature/)
Source code address: [FTB.sv](https://github.com/XS-MLVP/env-xs-ov-00-bpu/tree/main/rtl/FTB)
Function description document: [FTB Branch Predictor](https://open-verify.cc/xs-bpu/en/docs/modules/03_ftb/)
Reference function point document: [FTB Function List](https://open-verify.cc/xs-bpu/en/docs/feature/02_ftbfeature/)
1. Complete the code and document reading of the uFTB sub-predictor, understand the working principle and module functions of the uFTB. Clarify the structure of the FTB item cache used by the uFTB.
2. Based on the given reference function points, improve the function points that uFTB needs to verify, and decompose specific test points for these function points. At the same time, explain the significance of each test point to verify the function points.
3. Based on the decomposed test points, complete the test case writing for uFTB. Test cases need to cover all test points. At the same time, a detailed explanation of the test cases is required, including the purpose, input, output, expected results, and principles of the test cases.
4. Complete the coding of the test cases for uFTB; it is recommended to also complete **the writing of the reference model**. Ensure that the test cases run in the verification environment. During coding, code quality must be ensured, including readability, maintainability, and scalability.
5. Complete the running of the test cases for uFTB and generate a test report. The test report needs to include the running results of the test cases, code line coverage, function coverage, etc. The test report needs to be saved in the `tests/report` directory, and you can open `tests/report/uFTB-yourId.html` in a browser to view its content.
6. Before the final submission, you need to check the test report to ensure that the test report meets the basic requirements for submitting PR.
### Task 2. TageSC
Source code address: [TageSC.sv](https://github.com/XS-MLVP/env-xs-ov-00-bpu/tree/main/rtl/TageSC)
Function description document: [TAGE-SC Branch Predictor](https://open-verify.cc/xs-bpu/en/docs/modules/02_tage_sc/)
Reference function point document: [TAGE-SC Function List](https://open-verify.cc/xs-bpu/en/docs/feature/03_tagescfeature/)
1. Complete the code and document reading of the TageSC sub-predictor, understand the working principles and functions of the Tage and SC single modules, and then understand the overall functions of the TageSC module. Clarify the structure of the Tage and SC table items used, and the working principle of branch folding history.
2. Based on the given reference function points, improve the function points that Tage, SC, TageSC need to verify, and decompose specific test points for these function points. At the same time, explain the significance of each test point to verify the function points.
3. Based on the decomposed test points, complete the test case writing for TageSC. Test cases need to cover all test points. At the same time, a detailed explanation of the test cases is required, including the purpose, input, output, expected results, and principles of the test cases.
4. Complete the coding of the test cases for TageSC; the writing of **the reference model** is required. Ensure that the test cases run in the verification environment. During coding, code quality must be ensured, including readability, maintainability, and scalability.
5. Complete the running of the test cases for TageSC and generate a test report. The test report needs to include the running results of the test cases, code line coverage, function coverage, etc. The test report needs to be saved in the `tests/report` directory, and you can open `tests/report/TageSC-yourId.html` in a browser to view its content.
6. Before the final submission, you need to check the test report to ensure that the test report meets the basic requirements for submitting PR.
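Folded history, mentioned in step 1 above, is a standard TAGE-family technique: the long global branch history is XOR-folded down to the index or tag width. A minimal sketch of the idea (conceptual only; the RTL keeps the folded value updated incrementally instead of recomputing it each cycle):

```python
# XOR-fold a long branch history (list of 0/1 bits) into `width` bits.
# Conceptual illustration; real hardware updates the folded registers
# incrementally as new branches retire.
def fold_history(history, width):
    folded = 0
    for i, bit in enumerate(history):
        folded ^= bit << (i % width)   # bit i lands in position i mod width
    return folded

h = [1, 0, 1, 1]              # 4 bits of history
folded = fold_history(h, 2)   # folded into 2 bits
```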
### Task 3. ITTAGE
Source code address: [ITTAGE.sv](https://github.com/XS-MLVP/env-xs-ov-00-bpu/tree/main/rtl/ITTAGE)
Function description document: [ITTAGE Branch Predictor](https://open-verify.cc/xs-bpu/en/docs/modules/04_ittage/)
Reference function point document: [ITTAGE Function List](https://open-verify.cc/xs-bpu/en/docs/feature/04_ittagefeature/)
1. Complete the code and document reading of the ITTAGE sub-predictor, understand the working principles and functions of the ITTAGE. Clarify the structure of the Tage table items used by ITTAGE, understand the predictor structure of ITTAGE. Also understand the working principle of branch folding history.
2. Based on the given reference function points, improve the function points that ITTAGE needs to verify, and decompose specific test points for these function points. At the same time, explain the significance of each test point to verify the function points.
3. Based on the decomposed test points, complete the test case writing for ITTAGE. Test cases need to cover all test points. At the same time, a detailed explanation of the test cases is required, including the purpose, input, output, expected results, and principles of the test cases.
4. Complete the coding of the test cases for ITTAGE; the writing of **the reference model** is required. Ensure that the test cases run in the verification environment. During coding, code quality must be ensured, including readability, maintainability, and scalability.
5. Complete the running of the test cases for ITTAGE and generate a test report. The test report needs to include the running results of the test cases, code line coverage, function coverage, etc. The test report needs to be saved in the `tests/report` directory, and you can open `tests/report/ITTAGE-yourId.html` in a browser to view its content.
6. Before the final submission, you need to check the test report to ensure that the test report meets the basic requirements for submitting PR.
### Task 4. RAS
Source code address: [RAS.sv](https://github.com/XS-MLVP/env-xs-ov-00-bpu/tree/main/rtl/RAS)
Function description document: [RAS Branch Predictor](https://open-verify.cc/xs-bpu/en/docs/modules/05_ras/)
Reference function point document: [RAS Function List](https://open-verify.cc/xs-bpu/en/docs/feature/05_rasfeature/)
1. Complete the code and document reading of the RAS sub-predictor and understand the working principle and module function of RAS. Understand how stack frames work when a program runs, and then clarify the working principle of the RAS stack. In particular, be clear that the RAS predictor provides predictions for call and ret instructions.
2. Based on the given reference function points, improve the function points that RAS needs to verify, and decompose specific test points for these function points. At the same time, explain the significance of each test point to the verification function point.
3. Based on the decomposed test points, complete the test case writing for RAS. The test cases need to cover all test points. At the same time, a detailed explanation of the test cases is required, including the purpose, input, output, expected results, and principles of the test cases.
4. Complete the coding of the RAS test cases; the writing of **the reference model** is required. Ensure that the test cases run in the verification environment. During coding, code quality must be ensured, including readability, maintainability, and scalability.
5. Complete the running of the RAS test cases and generate a test report. The test report needs to include the running results of the test cases, code line coverage, function coverage, and other information. The test report needs to be saved in the `tests/report` directory, and you can open `tests/report/RAS-yourId.html` in a browser to view its content.
6. Before the final submission, you need to check the test report to ensure that the test report meets the basic requirements for submitting PR.
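The stack discipline behind RAS can be illustrated with a toy model (illustrative only; it ignores the real design's speculative/commit stacks and overflow handling): a call pushes its return address, and a ret is predicted by popping the top of the stack.

```python
# Toy RAS: a call pushes pc + instruction length; a ret pops the prediction.
# Illustration of the concept only, not the XiangShan implementation.
class ToyRAS:
    def __init__(self):
        self.stack = []
    def on_call(self, pc, inst_len=4):
        self.stack.append(pc + inst_len)   # push the return address
    def predict_ret(self):
        return self.stack.pop() if self.stack else None

ras = ToyRAS()
ras.on_call(0x1000)          # outer call
ras.on_call(0x2000)          # nested call
inner = ras.predict_ret()    # returns to just after the nested call
outer = ras.predict_ret()    # then to just after the outer call
```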
### Task 5. BPU Top
Source code address: [Predictor.sv & Composer.sv](https://github.com/XS-MLVP/env-xs-ov-00-bpu/tree/main/rtl/common)
Function description document: [BPU Top Module](https://open-verify.cc/xs-bpu/en/docs/modules/00_bpu_top/)
Reference function point document: [BPU Top Function List](https://open-verify.cc/xs-bpu/en/docs/feature/00_bpufeature/)
1. Complete the code and document reading of the BPU Top sub-predictor, understand the working principle and module function of BPU Top. Understand the predictor structure of BPU Top, clarify how the BPU Top predictor provides predictions for different types of branches. At the same time, understand how BPU Top interacts with the external FTQ module.
2. Based on the given reference function points, improve the function points that BPU Top needs to verify, and decompose specific test points for these function points. At the same time, explain the significance of each test point to the verification function point.
3. Based on the decomposed test points, complete the test case writing for BPU Top. The test cases need to cover all test points. At the same time, a detailed explanation of the test cases is required, including the purpose, input, output, expected results, and principles of the test cases.
4. Complete the coding of the BPU Top test cases; the writing of **the reference model and the FTQ simulation verification environment** is required. Ensure that the test cases run in the verification environment. During coding, code quality must be ensured, including readability, maintainability, and scalability.
5. Complete the running of the BPU Top test cases and generate a test report. The test report needs to include the running results of the test cases, code line coverage, function coverage, and other information. The test report needs to be saved in the `tests/report` directory, and you can open `tests/report/BPU-Top-yourId.html` in a browser to view its content.
6. Before the final submission, you need to check the test report to ensure that the test report meets the basic requirements for submitting PR.
## Milestones
1. Complete function point and test point decomposition: After completing the reading work of `1.`, proceed to `2.`. You can start the following work after communicating with us to confirm the completion of `2.`.
2. Complete test case communication: After completing `3.`, communicate with us to confirm that the behavior of `3.` meets expectations and the principle is correct before you can start the following work.
3. Complete the test report: After the process of `4.` and `5.`, a test report will be generated. When the report finally meets the basic requirements for PR submission after continuous iteration, and we review and pass, it is considered that the verification task is completed. During the process of completing the iteration, if you encounter bugs, you need to communicate with us, and we will give corresponding points. At the same time, if there are **questions worth asking**, you can discuss them in the discussion group.
## Basic requirements for PR submission
Coverage requirements:
1. Code line coverage must be greater than 95%; explain the reasons for any uncovered code
1. Function coverage must reach 100%
1. Coverage must be obtained by running the verification environment; the report and its generation logic cannot be modified
Document requirements:
1. Reasonably decompose the function points into test points. Function points can be added by yourself, but the original function points cannot be deleted.
1. The designed test cases must cover all test points and function points
1. Please write the test document according to the template
1. If you take on multiple verification tasks, write a separate verification report for each
Ways to get points:
1. Reasonable test point decomposition, reasonable verification plan, standardized verification document
1. Find bugs and analyze their causes; points are awarded according to the bug's confirmed level
1. Fix errors in the project's documents and code; points are awarded according to the error level
1. Submit the final verification report; points are awarded according to report quality and coverage
1. Final report score