Research on Multi-Camera, Multi-Person Real-Time Recognition and Early Warning of Falls and Other Abnormal Behaviours Based on OpenPifPaf

(Demo output: outfallingdown)

OpenPifPaf performs human pose estimation on the input video; a long short-term memory (LSTM) neural network then takes five temporal and spatial features extracted from the resulting pose information (used as the current input Xn) to predict the "fall" action. Multi-camera and multi-person real-time detection are supported. The model is trained on the UP-Fall Detection dataset and implemented in PyTorch.
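To make the pipeline concrete, below is a minimal PyTorch sketch of an LSTM classifier of this shape. It is an illustration only: the hidden size, the 30-frame window, and the two-class head are assumptions, not the repository's exact configuration (the real network definition and trained weights live in model.py and lstm.sav / lstm2.sav).

```python
# Illustrative sketch, not the repository's exact model: hidden_dim=48 and the
# 30-frame window below are assumed values.
import torch
import torch.nn as nn

class FallLSTM(nn.Module):
    def __init__(self, n_features=5, hidden_dim=48, n_classes=2):
        super().__init__()
        # The LSTM consumes one 5-dimensional feature vector per frame.
        self.lstm = nn.LSTM(n_features, hidden_dim, batch_first=True)
        # A linear head maps the final hidden state to fall / no-fall logits.
        self.head = nn.Linear(hidden_dim, n_classes)

    def forward(self, x):
        # x: (batch, time, n_features) -- five spatio-temporal features per frame.
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])  # classify from the last time step

# Example: a batch of 4 clips, 30 frames each, 5 features per frame.
model = FallLSTM()
logits = model(torch.randn(4, 30, 5))
probs = torch.softmax(logits, dim=1)  # per-clip probability of "fall"
```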

(Figure: LSTM network architecture)

Installation

pip install -r requirements.txt

Usage

python fall_detector.py --num_cams=1
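For example, following the --video convention documented below, two recorded views saved as abc1.xyz and abc2.xyz (illustrative file names) can be processed together with:

python fall_detector.py --num_cams=2 --video=abc.xyz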

Full command-line usage

usage: fall_detector.py [-h] [--seed-threshold SEED_THRESHOLD]
                        [--instance-threshold INSTANCE_THRESHOLD]
                        [--keypoint-threshold KEYPOINT_THRESHOLD]
                        [--decoder-workers DECODER_WORKERS]
                        [--dense-connections]
                        [--dense-coupling DENSE_COUPLING] [--caf-seeds]
                        [--no-force-complete-pose]
                        [--profile-decoder [PROFILE_DECODER]]
                        [--cif-th CIF_TH] [--caf-th CAF_TH]
                        [--connection-method {max,blend}] [--greedy]
                        [--checkpoint CHECKPOINT] [--basenet BASENET]
                        [--headnets HEADNETS [HEADNETS ...]] [--no-pretrain]
                        [--two-scale] [--multi-scale] [--no-multi-scale-hflip]
                        [--cross-talk CROSS_TALK] [--no-download-progress]
                        [--head-dropout HEAD_DROPOUT] [--head-quad HEAD_QUAD]
                        [--resolution RESOLUTION] [--resize RESIZE]
                        [--num_cams NUM_CAMS] [--video VIDEO] [--debug]
                        [--disable_cuda] [--plot_graph] [--joints]
                        [--skeleton] [--coco_points] [--save_output]
                        [--fps FPS] [--out-path OUT_PATH]
                        [--input_direct INPUT_DIRECT]

optional arguments:
  -h, --help            show this help message and exit
  --resolution RESOLUTION
                        Resolution prescale factor from 640x480. Will be
                        rounded to multiples of 16. (default: 0.4)
  --resize RESIZE       Force input image resize. Example WIDTHxHEIGHT.
                        (default: None)
  --num_cams NUM_CAMS   Number of Cameras. (default: 1)
  --video VIDEO         Path to the video file. For single-video fall
                        detection (--num_cams=1), save your video as abc.xyz
                        and set --video=abc.xyz. For two-video fall
                        detection (--num_cams=2), save your videos as
                        abc1.xyz and abc2.xyz and set --video=abc.xyz.
                        (default: None)
  --debug               debug messages and autoreload (default: False)
  --disable_cuda        disables CUDA support and runs on the CPU (default:
                        False)

decoder configuration:
  --seed-threshold SEED_THRESHOLD
                        minimum threshold for seeds (default: 0.5)
  --instance-threshold INSTANCE_THRESHOLD
                        filter instances by score (default: 0.2)
  --keypoint-threshold KEYPOINT_THRESHOLD
                        filter keypoints by score (default: None)
  --decoder-workers DECODER_WORKERS
                        number of workers for pose decoding (default: None)
  --dense-connections   use dense connections (default: False)
  --dense-coupling DENSE_COUPLING
                        dense coupling (default: 0.01)
  --caf-seeds           [experimental] (default: False)
  --no-force-complete-pose
  --profile-decoder [PROFILE_DECODER]
                        specify out .prof file or nothing for default file
                        name (default: None)

CifCaf decoders:
  --cif-th CIF_TH       cif threshold (default: 0.1)
  --caf-th CAF_TH       caf threshold (default: 0.1)
  --connection-method {max,blend}
                        connection method to use, max is faster (default:
                        blend)
  --greedy              greedy decoding (default: False)

network configuration:
  --checkpoint CHECKPOINT
                        Load a model from a checkpoint. Use "resnet50",
                        "shufflenetv2k16w" or "shufflenetv2k30w" for
                        pretrained OpenPifPaf models. (default: None)
  --basenet BASENET     base network, e.g. resnet50 (default: None)
  --headnets HEADNETS [HEADNETS ...]
                        head networks (default: None)
  --no-pretrain         create model without ImageNet pretraining (default: True)
  --two-scale           [experimental] (default: False)
  --multi-scale         [experimental] (default: False)
  --no-multi-scale-hflip
                        [experimental] (default: True)
  --cross-talk CROSS_TALK
                        [experimental] (default: 0.0)
  --no-download-progress
                        suppress model download progress bar (default: True)

head:
  --head-dropout HEAD_DROPOUT
                        [experimental] zeroing probability of feature in head
                        input (default: 0.0)
  --head-quad HEAD_QUAD
                        number of times to apply quad (subpixel conv) to heads
                        (default: 1)

Visualisation:
  --plot_graph          Plot the graph of features extracted from keypoints of
                        pose. (default: False)
  --joints              Draw joints keypoints on the output video. (default: True)
  --skeleton            Draw skeleton on the output video. (default: True)
  --coco_points         Visualises the COCO points of the human pose. (default: False)
  --save_output         Save the result in a video file. Output videos are
                        saved in the same directory as the input videos, with
                        "out" prepended to the file name. (default: False)
  --fps FPS             FPS for the output video. (default: 18)
  --out-path OUT_PATH   Path at which to save the output video, in .avi
                        format. (default: result.avi)
  --input_direct INPUT_DIRECT
                        Save the input stream to the images directory.
                        (default: None)
  • The video source can be a live camera feed or a previously downloaded
    video file.
  • On a desktop machine (as opposed to a headless server), the live frames
    can optionally be displayed in a window; see the sketch below.
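A minimal sketch of these two input modes, assuming OpenCV is used for capture; the function name, file-naming helper, and display flag below are illustrative, not the repository's actual API:

```python
# Illustrative sketch of the two input modes; open_source() and show_window
# are hypothetical names, not part of this repository.
import cv2

def open_source(num_cams=1, video=None):
    # Open live cameras (indices 0..num_cams-1) or saved video files.
    if video is None:
        return [cv2.VideoCapture(i) for i in range(num_cams)]
    if num_cams == 1:
        return [cv2.VideoCapture(video)]
    # Per the --video convention above: abc.xyz -> abc1.xyz, abc2.xyz, ...
    stem, ext = video.rsplit(".", 1)
    return [cv2.VideoCapture(f"{stem}{i + 1}.{ext}") for i in range(num_cams)]

caps = open_source(num_cams=1)  # or open_source(num_cams=2, video="abc.xyz")
show_window = True              # only on a desktop machine, not a headless server
while all(cap.isOpened() for cap in caps):
    frames = [cap.read() for cap in caps]
    if not all(ok for ok, _ in frames):
        break
    if show_window:
        for i, (_, frame) in enumerate(frames):
            cv2.imshow(f"cam{i}", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
for cap in caps:
    cap.release()
cv2.destroyAllWindows()
```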

References

  • OpenPifPaf
  • UP-Fall Detection Dataset
  • Multi-camera, multi-person, and real-time fall detection using long short term memory
  • Lei Wang, Du Q. Huynh, Piotr Koniusz. A Comparative Review of Recent Kinect-Based Action Recognition Algorithms. IEEE Transactions on Image Processing, 2019.
  • Nusrat Tasnim, Mohammad Khairul Islam, Joong-Hwan Baek. Deep Learning Based Human Activity Recognition Using Spatio-Temporal Image Formation of Skeleton Joints. Applied Sciences.
  • Mickael Delamare, Cyril Laville, Adnane Cabani, Houcine Chafouk. Graph Convolutional Networks Skeleton-Based Action Recognition for Continuous Data Stream: A Sliding Window Approach. 16th International Conference on Computer Vision Theory and Applications.
  • Tasweer Ahmad, Lianwen Jin, Xin Zhang, Songxuan Lai, Guozhi Tang, Luojun Lin. Graph Convolutional Neural Network for Human Action Recognition: A Comprehensive Survey.
  • Zehua Sun, Jun Liu, Qiuhong Ke, Hossein Rahmani, Mohammed Bennamoun, Gang Wang. Human Action Recognition from Various Data Modalities: A Review.