<!--
<p align="center">
<img src="https://github.com///raw/main/docs/source/logo.png" height="150">
</p>
-->
<h1 align="center">
vegetationcoverfraction
</h1>

Compute fcover from 4P phenoscript output or a local image folder using a deep learning model.
You can remove out-of-rank vegetation with the dedicated post-processing (see the More advanced usage and Method sections).
The script produces visualizations for every analyzed image, as well as a CSV file containing the results, warnings and, when available, some metadata.
## 🚀 Installation
#### Install from git:
```bash
pip install git+https://forgemia.inra.fr/ue-apc/modules/vegetationcoverfraction
```
#### Installation in development mode:
```bash
git clone https://forgemia.inra.fr/ue-apc/modules/vegetationcoverfraction
cd vegetationcoverfraction
pip install -e .
```
#### Command Line Interface
vegetationcoverfraction has a CLI written with [Typer](https://typer.tiangolo.com/). The shell command is `fcover_rgb_drone`; run `fcover_rgb_drone --help` to list all options.
WARNING 1: The CLI is less flexible than the Python script (see the More advanced usage section).
WARNING 2: If you change the cli.py script, you have to reinstall vegetationcoverfraction for the changes to be picked up by the Typer CLI application.
## 👐 Contributing
To request a modification, bug fix or improvement, please open an issue on the project if you have access rights, or contact the maintainers:
* Jordan Bernigaud Samatan (jordan.bernigaud-samatan@inrae.fr).
## Authors
* Jordan Bernigaud Samatan (jordan.bernigaud-samatan@inrae.fr)
## ⚖️ License
The code in this package is licensed under the MIT License.
## More advanced usage

In a Python script you can import the module after installation (see the beginning of this README). Then simply call the FcoverPredictor and plug in the optional pre-/post-processing steps that you want.
```python
from vegetationcoverfraction.predictor import FcoverPredictor
from vegetationcoverfraction.available_pre_post_treatment import (
    Deskew,
    RemoveOutOfRanks,
)

pred = FcoverPredictor(
    "4PUAV",
    "path/to/data",
    "results",
    device="cuda",
    image_pre_operation=Deskew(),
    prediction_post_processing=RemoveOutOfRanks(
        nb_of_ranks=6, relative_height_width=0.8
    ),
)
```
This way you have more flexibility over the different parameters that can be tuned.
You can also create your own custom pre-/post-processing by importing the BasePreTreatment and BasePostTreatment abstract classes and implementing the call function (see the examples already implemented, and the sketch below).
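For illustration only, here is a minimal sketch of a custom post-processing step; the import location of BasePostTreatment, the `__call__` signature and the RemoveSmallBlobs class are assumptions, so check the already implemented examples for the real interface:

```python
import numpy as np
from skimage.morphology import remove_small_objects

# Assumption: BasePostTreatment is exposed next to the implemented treatments.
from vegetationcoverfraction.available_pre_post_treatment import BasePostTreatment


class RemoveSmallBlobs(BasePostTreatment):
    """Hypothetical post-treatment: drop vegetation blobs smaller than `min_size` pixels."""

    def __init__(self, min_size: int = 50):
        self.min_size = min_size

    def __call__(self, mask: np.ndarray) -> np.ndarray:
        # Assumed interface: the treatment receives and returns a binary vegetation mask.
        return remove_small_objects(mask.astype(bool), min_size=self.min_size)
```

Such a class could then be passed to the predictor via `prediction_post_processing=RemoveSmallBlobs(min_size=100)`, in the same way as RemoveOutOfRanks above.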
# Method
## Global segmentation
We perform vegetation segmentation (including weeds) via deep learning, using the VegAnn model.

## Remove out of ranks
The RemoveOutOfRanks post-processing (required parameter: the number of ranks to find) works according to the following steps:
### Detect lines in binary mask
We use the Hough transform to infer the rank locations.

We then extract the median angle of all detected lines and rotate the image so that the lines are horizontal, if they are not already.
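This step can be sketched as follows (an illustrative reimplementation, not the package's internal code; the OpenCV parameters and the mask.png path are placeholders):

```python
import cv2
import numpy as np

# Placeholder: binary vegetation mask (0/255) produced by the segmentation step.
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# Detect line segments in the binary mask with the probabilistic Hough transform.
lines = cv2.HoughLinesP(
    mask,
    rho=1,
    theta=np.pi / 180,
    threshold=100,
    minLineLength=mask.shape[1] // 4,
    maxLineGap=50,
)

# Median orientation of all detected segments, in degrees.
angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1)) for x1, y1, x2, y2 in lines[:, 0]]
median_angle = float(np.median(angles))

# Rotate the mask so the rows become horizontal (the sign of the angle may need
# flipping depending on the image coordinate convention).
h, w = mask.shape
rotation = cv2.getRotationMatrix2D((w / 2, h / 2), median_angle, 1.0)
rotated = cv2.warpAffine(mask, rotation, (w, h), flags=cv2.INTER_NEAREST)
```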
### Get rank width
We project the mask onto the y axis, smooth the resulting signal, remove the background and extract the peaks. For peak extraction we keep the peaks whose value is >= 0.5 * the maximum peak (tunable parameter).



We then estimate the rank width as the width measured at 0.8 * the peak height (this parameter is tunable).
Finally, the width is multiplied by a width factor (1 by default, tunable).
We finally obtain the rank bands, which are then combined with the semantic mask.
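The whole projection-and-peaks procedure can be sketched with SciPy as follows (illustrative only; the 0.5, 0.8 and width-factor values mirror the defaults described above, everything else is an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter1d
from scipy.signal import find_peaks

# Placeholder: binary mask (True/False) with the crop rows already horizontal, shape (H, W).
rotated = np.load("rotated_mask.npy").astype(bool)

# 1. Project the mask onto the y axis and smooth the 1-D profile.
profile = rotated.sum(axis=1).astype(float)
smooth = uniform_filter1d(profile, size=25)  # smoothing window is illustrative

# 2. Background removal, then keep peaks whose value is >= 0.5 * the maximum peak.
signal = smooth - smooth.min()
peaks, _ = find_peaks(signal, height=0.5 * signal.max())

# 3. For each peak, the rank width is the span where the signal stays above
#    0.8 * the peak height, optionally scaled by a width factor.
width_factor = 1.0  # tunable
bands = np.zeros_like(rotated)
for p in peaks:
    level = 0.8 * signal[p]
    left, right = p, p
    while left > 0 and signal[left - 1] >= level:
        left -= 1
    while right < len(signal) - 1 and signal[right + 1] >= level:
        right += 1
    half_extra = int(round((right - left) * (width_factor - 1) / 2))
    bands[max(left - half_extra, 0): right + half_extra + 1, :] = True

# 4. Combine the rank bands with the semantic mask: vegetation outside the
#    bands is considered out-of-rank and discarded.
in_rank_vegetation = rotated & bands
```
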
Here we deliberately use a very difficult and contaminated plot. On an easier image:

## Hue thresholding
You can enable Hue thresholding if required. The implementation provided converts the image to HSV and thresholds the hue, keeping values between 0.18 and 0.5 (tunable parameters).
Combined with the vegetation/background deep learning model, it can produce a green cover fraction.
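For illustration (a sketch, not the package's code; the file paths are placeholders and the 0.18–0.5 bounds come from the description above):

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hsv

# Placeholder input image.
rgb = io.imread("plot.jpg")

# rgb2hsv returns channels scaled to [0, 1]; channel 0 is the hue.
hue = rgb2hsv(rgb)[..., 0]

# Keep pixels whose hue falls in the green band described above.
green_mask = (hue >= 0.18) & (hue <= 0.5)

# Intersect with the deep learning vegetation mask to get a green cover fraction.
vegetation_mask = np.load("vegetation_mask.npy") > 0  # placeholder, same H x W
green_cover_fraction = float((green_mask & vegetation_mask).mean())
print(f"Green cover fraction: {green_cover_fraction:.3f}")
```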

