Recommended Apps and Services

I mainly write about apps and IT services, with occasional casual notes on side jobs and business.

Good and bad examples with MiDaS depth estimation (experiment)

I examined why MiDaS works well on some images and poorly on others.
If MiDaS works well, there must be specific patterns that help it, and likewise there must be patterns that hurt it.


To confirm that there are better and worse patterns for MiDaS depth estimation, especially with respect to image conditions, I ran an experiment with 3 patterns this time.

This is a memo for my own sake, so I wrote it casually rather than formally.

Bad pattern

This is the bad pattern.

Original image

Predicted image


Why this is a bad pattern

1. The target objects for prediction are small.
2. A very close object is in view and disturbs the prediction. In this case, the edge of the car is the culprit.
3. The image should have focused mainly on making the estimation targets prominent.


Total prediction stream

Better pattern

This is the better pattern. I changed the image conditions.

Changed conditions

1. I cropped both the height and the width, and erased the unnecessary parts on both sides.
2. I made the estimation targets appear bigger than in the previous bad pattern.
Original image


Predicted image


Why this is a better pattern

1. The target objects in the image appear big enough to predict.
2. Cropping out the non-moving (unnecessary) parts of the view improved the prediction.

Total prediction stream(gif)


Best pattern

This is the best pattern. I changed the image conditions further.


Changed conditions

1. I cropped the height and erased the unnecessary parts.
2. I didn't crop the width, to keep the image balanced: the image size is adjusted so the target objects are easy to see and the image is easy to predict on.
3. I made the estimation targets appear big.


Original image


Predicted image


Why this is the best pattern

1. The target objects in the image appear big enough to predict.
2. The image size is well-balanced, so prediction became easier.
=> In the previous case I cropped too much: the targets were big enough to predict, but the image proportions were unbalanced for prediction.

3. The targets are big and the image size is well-balanced. These two things make prediction work better than in any other pattern.

Total prediction stream(gif)


To make MiDaS work better, you have to adjust the image conditions so they favor prediction.
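
For reference, a minimal sketch of the kind of pipeline behind these experiments: crop the frame so the target fills more of the image, then run MiDaS (small model) via torch.hub. The crop margins and the file name are made up for illustration; only the torch.hub loading pattern follows the official MiDaS usage.

import cv2
import torch

# load MiDaS small and its matching input transform from torch.hub
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

img = cv2.imread("input.jpg")  # hypothetical file name
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# crop away unnecessary parts so the target appears bigger (margins are arbitrary)
h, w, _ = img.shape
img = img[h // 4:, w // 8: w - w // 8]

batch = transform(img)
with torch.no_grad():
    pred = midas(batch)
    # resize the prediction back to the cropped image size
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode="bicubic", align_corners=False
    ).squeeze()
depth = pred.cpu().numpy()  # relative inverse depth map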

Death Note episode 1 subtitles

day in, day out. the same news on permanent repeat. this is all so ridiculous. this world is rotten. how to use it: the human whose name is written in this notebook shall die. that's twisted. it's no different from the chain letters you get. something must be wrong with me to even consider it. depending on the person you have in mind, you can kill them easily or painfully. it's pretty detailed for a prank. so I write a name and the person dies. I can't believe that.

 

wait. on the off chance someone really dies, would that make me a murderer? that's completely impossible. looks like the notebook doesn't work. I didn't expect it to anyway. the suspect suddenly collapsed. a heart attack? no way. it's a coincidence. this has to be a coincidence. it's already 6:30, you know? don't you have cram school tonight? I'm going, just a sec. it's unlikely, but what if by some chance this thing is for real? I have to test it one more time. in which case it should probably be another criminal. if he is too well known, that would be a problem. if he dies, there is a good possibility the news of his death will be covered up. maybe I should try killing him. no, I should definitely avoid killing people I know. and then again, does it really matter? I mean, would anyone really care if the guy disappeared one day? once you actually start looking around, it makes you wonder whether it would be doing society a favor to get rid of these people.

hey baby. where are you going? come and play with us. what's up, little lady? this is Takuo Shibuimaru. now, what will happen? Takuo, look out! that settles it. the Death Note is for real. he just came out of nowhere. now then, I'd better get going. heading out somewhere, Ryuk? that's true. I dropped my Death Note. you really messed up, didn't you. hold on, didn't you say you tricked the old man to get a second book? don't tell me you dropped both of them. you must have some idea where you dropped them? yeah, the human world. what?

 

please don't interrupt me, ok? as you wish. you don't want anything at all, do you? you can ask for whatever you like. I've already got what I want.

you seem quite used to it? no reason to act surprised. I am the shinigami Ryuk. that used to be my notebook. judging by your laughter, you've already figured out that what you have is no ordinary notebook. a shinigami, the god of death. I'm not surprised. in fact, I've been waiting for you. I'd already figured out that this Death Note I found was real. it didn't take me long. and now that I've witnessed proof of its power, I only feel more confident in what I'm going to do. that's interesting. I certainly wasn't expecting this. several Death Notes have made their way to the human world in the past, but you are the first to write this many names. look at how many people you've killed in only five days. most are reluctant to write this much. I've already prepared myself. I used the notebook even though I knew it belonged to a shinigami. and now that shinigami has come. so, what will happen to me? you are here to take my soul, right? huh? what do you mean? is that some fantasy you humans came up with? I'm not gonna do anything to you. the notebook becomes part of the human world from the moment it touches the ground there. in other words, the notebook is now yours. this is mine? if you don't want it, just give it to someone else. but if you give it away, I'll have no choice but to erase your memories of the notebook. so then you're saying I can use the Death Note all I want and won't be punished? let's just say this.

when it's your time to die, it will fall to me to write your name in my death note. be warned: any human who has used a death note can go to neither heaven nor hell, for eternity. that's all. now you have something to look forward to after you die. it'll be all right. answer it. you'll ruin your eyesight. the notebook you found originally belonged to me. since you are now using it, you are the only one who is able to see me. and of course, my voice can only be heard by you. in other words, the death note is the bond between Light the human and Ryuk the shinigami. I just have one more question I wanna ask you. why was I chosen for this? hey, are you even listening? apples in the human world are worth the trip. what's the best way to describe these? juicy? just answer my question. I didn't choose you. don't you see this is all an accident? you actually thought you were chosen because you're so smart or something? don't be so vain. it just happened to fall around here, and you just happened to pick it up. that's all. that's why I wrote the instructions in English, the most popular language in the human world. then why did you drop it in the first place? you even wrote down specific instructions. don't try to tell me it was an accident. you're asking me why? I did it because I was bored. you were bored? most of the time shinigami are either taking a nap or gambling. if you take the time to write names in a death note, the others just laugh at you: "why are you working so hard?" if you wrote the name of another shinigami, it would be pointless, because they wouldn't die. and since we live in the shinigami world, it brings no amusement to kill those in the human world either. so I figured I would have more fun if I came down here myself. anyway, I am surprised at how many names you've written. but I want to know why you only wrote down the cause of death for the guy who was hit by the truck. if you don't write down the cause of death, the victim dies of a heart attack. and that's probably the best thing about the death note. you see, I've already exhausted the list of the world's major criminals, and eventually I'll get rid of them all. what's the point of doing that? it's only a matter of time before people figure out that these criminals are being eliminated by someone. I want the world to know of my existence, and that there's someone passing righteous judgement on the wicked. why even bother? what are you trying to achieve by passing judgement on them? I mean, why do you care? because I've been bored too. I wasn't ready to believe it at first, but it's obvious now: there's something about the death note itself that makes humans want to try it at least once. I killed them both. I killed two men. human lives. besides, who am I to pass judgement on another? no, wait. maybe I'm not wrong. this is exactly what I've been thinking about. the world is rotten. those who are rotten deserve to die. at first I wrote the names of the worst criminals I could think of, like I was cleaning up the world one at a time, so that eventually no one will ever do anything evil again. while the truly guilty ones who deserve to be punished for their crimes die of heart attacks, the people who are less guilty but still make trouble for others will slowly be erased through disease or accidental death. only then will the world start to move in the right direction. it'll be a new world, free of injustice, populated by people I've judged to be honest, kind, and hardworking. but if you did that, it would make you the only bad person left.
huh? I have no idea what you're talking about. I am a hardworking, honest student, who's considered one of Japan's best and brightest. and I will become the god of this new world. it's just as I thought. humans are so interesting.

English translation problems for speaking

Important points to consider about speech and writing

In conversation, there is no time for planning. Expressions come out more or less off the cuff (depending on the individual), with no ability to edit, and with immediate need for processing. We hear a chunk of language and at the same time as parsing it, we are already putting together a response to it – in real time.

This speed has consequences for the kind of language we use and hear. When speaking, we rely on recycled expressions, formulae we use over and over again, and less complex structures.

For example, we are happy enough writing and reading a sentence like:

That the human brain can use language is amazing


But in speech, we prefer:

It is amazing that the human brain can use language

Both are grammatical, yet one is simpler and quicker for the brain to decode

Reference: https://theconversation.com/the-slippery-grammar-of-spoken-vs-written-english-92912


Good and bad expressions in conversation


bad
They don't understand the importance of skill in their life.

good
Useful skills make your life free. But they don't realize how important they are.

bad
I can remember how much effort I put into studying when I was 18.

good
I can remember studying so hard when I was 18.


NO.1



Even an overseas trip that should be fun comes with its share of trouble. For example, it is not so unusual to have to extend a stay abroad because a flight has been canceled due to bad weather or a natural disaster. Whatever the situation, what matters is the attitude of calmly assessing the circumstances and trying to solve the problem at hand, drawing on knowledge and information about the region and on one's command of foreign languages.



example answer

If you travel abroad, you will have fun but may also encounter accidents you don't expect. It isn't unusual to have to stay in a foreign country longer than you planned because your flight is canceled by sudden events such as natural disasters or bad weather. In such a case, it is important to try to gather the necessary information and knowledge about the place you are staying, using the languages you can speak.


ChatGPT with smart expression

Traveling to other countries can be an enjoyable experience. However, unexpected accidents can occur. For instance, flight cancellations due to bad weather or natural disasters are not uncommon. This may result in an extended stay in the country. In such cases, it is essential to deal with the situation by comprehending what is happening, utilizing the information and knowledge about the area where you are staying, and leveraging your foreign language skills.

ChatGPT4's smart expression


Traveling abroad can be a blast, but also a hassle if things go wrong. Sometimes you might get stuck in a foreign country longer than you intended because of unforeseen circumstances like natural disasters or bad weather. When that happens, you need to be resourceful and use whatever languages you can to find out what you need to know about the place you're staying.




NO.2

When you are talking with someone and mention that you like something, be it music or film or anything else, you will usually be asked right away what your absolute favorite is. If you try to answer this question seriously, it can be quite agonizing. Any true lover of film or music is bound to have a considerable number of candidates for favorite, and picking one of them means discarding all the rest.



my answer

When you talk with someone about something you are interested in, such as movies or music, most people will ask you what you like most. If you are serious about answering this question, it is usually very hard to do. This is because if you are deeply interested in something such as movies or music, you usually have many candidates for what you like most. Therefore, it is very hard to decide which one is the best among them.





ChatGPT with smart expression

“When discussing your interests in movies or music with someone, it is common for them to ask what you like most. However, answering this question can be challenging as you may have many favorites to choose from. Selecting the best among them can be difficult.”





NO.3

Human character is more complex than it appears, so there is no way to understand another person completely. That said, after exchanging only a few words with someone you have just met, you can fall under the illusion that you have known them for decades. This kind of misunderstanding sometimes becomes the start of a long friendship.



my answer

Because one’s character is more complicated than you might expect, it may be impossible to understand them. Even if you have only had a short chat with someone you’ve met for the first time, you may feel as if you have known them for a long time. This kind of misunderstanding sometimes becomes an opportunity for long relationships.



ChatGPT with smart expression

“Because one’s character is more complex than one might expect, it can be impossible to understand them. Even after only a short chat with someone you’ve just met, you may feel as though you’ve known them for a long time. This kind of misunderstanding can sometimes lead to long-lasting relationships.”



NO.4

In my opinion, it is important for young people today, regardless of gender, to be able to cook for themselves. Making a dish turn out delicious demands a rich imagination and manual dexterity, so cooking is very good for mental and physical health. Becoming conscious of ingredients also raises your interest in nature. What's more, if you can entertain friends with your cooking, or cook together with other people, it will undoubtedly deepen your bonds.

my answer

I think that young people today should learn to cook by themselves regardless gender. Cooking is good for your health mentally and pysically because in order to make delicious food you have to use complicated tool and think what will be needed for it. If you have oppotunities to think about cooking materials, you will become more interested in the nature. for you friends, If you can enjoy for your meals and cook with them, you can have make more strong relationship with them.



ChatGPT4's corrected version

I think that young people today should learn to cook by themselves regardless of gender. Cooking is good for your health, both mentally and physically, because in order to make delicious food you have to use complex tools and think about what ingredients you will need. If you have opportunities to think about cooking materials, you will become more interested in nature. Cooking can also strengthen your relationships with your friends. If you can enjoy your meals and cook with them, you can have more fun and bond with them.





ChatGPT4 with smart expression

“Cooking is a valuable skill that young people of any gender should learn. It benefits your health in multiple ways, as it requires you to use sophisticated tools and plan ahead for the ingredients you need. Cooking also sparks your curiosity about nature and the sources of your food. Moreover, cooking can enhance your social life, as you can share your meals and recipes with your friends and have a great time together.”



NO.5



Sleep deprivation has become a problem we can no longer overlook. One cause is that society as a whole now runs on the assumption that many people are awake late at night. Advances in lighting may also be throwing off our body clocks. Meanwhile, the starting times of most schools and companies have not changed. All this makes sleep deprivation easy to fall into, and it leads to reduced concentration during the day.



Today, lack of sleep is a serious problem, partly because in modern society most people tend to stay up late at night for many reasons. Their life cycle is very different from what it used to be because of technological advances that let them stay up late at night. On the other hand, most people must get up early for school or work, and these systems haven't changed for a long time. That's why in modern society many people end up sleep-deprived, resulting in a decline in their daytime concentration.



Conversational version by ChatGPT4

“Many people don’t get enough sleep these days, and that’s a big problem. One reason is that they have to wake up early for school or work, but they also like to stay up late at night. There are so many things to do with the new technology we have. But this changes our natural rhythm of sleeping and waking up. It makes us less focused and alert during the day.”



comment from ChatGPT4

I have simplified some of the words and sentences in your paragraph to make it sound more like spoken English. Here are some tips for converting written English to spoken English:

Use shorter sentences and simpler words. Avoid long and complex sentences that might confuse the listener.
Use contractions like “don’t”, “can’t”, “we’ve”, etc. They make your speech sound more natural and informal.
Use words and phrases that show your attitude or emotion. For example, you can use “and that’s a big problem” or “but they also like to stay up late at night” to express your opinion or contrast two ideas.
Use words and phrases that connect your sentences and ideas. For example, you can use “one reason is that” or “but this changes our natural rhythm” to show the relationship between your sentences.

Notes on installing the AWS CLI (2023/03)

A memo on how to install the AWS CLI as of March 2023.



Contents
1. Install the AWS CLI on Mac
2. Install the AWS CLI on Ubuntu
Addendum: preparation for AWS IoT on Ubuntu (download & build the AWS IoT Device Client)


1. Install the AWS CLI on Mac

docs.aws.amazon.com

# install the AWS CLI
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
sudo installer -pkg AWSCLIV2.pkg -target /
>>>
#installer: Package name is AWS Command Line Interface
#installer: Installing at base path /
#installer: The install was successful.

# check the path and version
which aws
# /usr/local/bin/aws
aws --version
# aws-cli/2.11.4 Python/3.11.2 Darwin/22.2.0 exe/x86_64 prompt/off

# configure access keys
aws configure
# AWS Access Key ID [None]: *****
# AWS Secret Access Key [None]: *******
# Default region name [None]: ap-northeast-1
# Default output format [None]: json

# to uninstall (CLI v2 installs under /usr/local/aws-cli)
sudo rm -rf /usr/local/aws-cli
sudo rm /usr/local/bin/aws


2. Install the AWS CLI on Ubuntu

# install required packages
apt-get -y update
apt-get -y upgrade
apt-get -y autoremove
sudo reboot

apt-get -y install build-essential libssl-dev cmake unzip git python3-pip
gcc --version  # > 9.3.0
cmake --version # > 3.10.x
openssl version # > 1.1.1
git --version  # > 2.20.1

# https://github.com/pyca/cryptography/issues/7959
pip3 install cryptography==39.0.0
pip3 uninstall pyOpenSSL
pip3 install pyOpenSSL==22.1.0

# install from git
export PATH=$PATH:~/.local/bin
git clone https://github.com/aws/aws-cli.git && cd aws-cli && git checkout v2
pip3 install -r requirements.txt
pip3 install .

aws --version
# aws-cli/2.11.4 Python/3.8.10 Linux/5.15.0-60-generic source/x86_64.ubuntu.20 prompt/off

# configure access keys
aws configure
#AWS Access Key ID [None]: *********
#AWS Secret Access Key [None]: ********
#Default region name [None]: ap-northeast-1
#Default output format [None]: json

Addendum: preparation for AWS IoT on Ubuntu (download & build the AWS IoT Device Client)

docs.aws.amazon.com


Creating the certificates

# download the Amazon Root CA certificate
mkdir ~/certs
curl -o ~/certs/AmazonRootCA1.pem https://www.amazontrust.com/repository/AmazonRootCA1.pem
#  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
#                                 Dload  Upload   Total   Spent    Left  Speed
#100  1188  100  1188    0     0  15631      0 --:--:-- --:--:-- --:--:-- 15631
chmod 745 ~
chmod 700 ~/certs
chmod 644 ~/certs/AmazonRootCA1.pem
ls -l ~/certs
# total 4
# -rw-r--r-- 1 hagi hagi 1188 Mar 21 22:51 AmazonRootCA1.pem

# download & build the AWS IoT Device Client
cd ~
git clone https://github.com/awslabs/aws-iot-device-client aws-iot-device-client
mkdir ~/aws-iot-device-client/build && cd ~/aws-iot-device-client/build
cmake ../
cmake --build . --target aws-iot-device-client
./aws-iot-device-client --help

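The JSON dump below looks like the output of aws iot create-keys-and-certificate, but the command itself seems to have dropped out of the memo. Based on the AWS IoT Device Client tutorial, it was presumably something like this (output file paths inferred from the ls output further down):

# create the device certificate and key pair (reconstructed, not in the original memo)
aws iot create-keys-and-certificate \
--certificate-pem-outfile "~/certs/testconn/device.pem.crt" \
--public-key-outfile "~/certs/testconn/public.pem.key" \
--private-key-outfile "~/certs/testconn/private.pem.key" \
--set-as-active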
>>>
{
    "certificateArn": "arn:aws:iot:ap-northeast-1:980023311172:cert/9c84a36ba2bcf5accf3ae7058032410515a237bd580b5037ced2d67857c8dfbc",
    "certificateId": "9c84a36ba2bcf5accf3ae7058032410515a237bd580b5037ced2d67857c8dfbc",
    "certificatePem": "-----BEGIN CERTIFICATE-----\n*****==\n-----END CERTIFICATE-----\n",
    "keyPair": {
        "PublicKey": "-----BEGIN PUBLIC KEY-----\n********\n-----END PUBLIC KEY-----\n",
        "PrivateKey": "-----BEGIN RSA PRIVATE KEY-----\n****==\n-----END RSA PRIVATE KEY-----\n"
    }
}

Create the directories used in the tutorial

mkdir ~/dc-configs
mkdir ~/policies
mkdir ~/messages
mkdir ~/certs/testconn
mkdir ~/certs/pubsub
mkdir ~/certs/jobs

chmod 745 ~
chmod 700 ~/certs/testconn
chmod 700 ~/certs/pubsub
chmod 700 ~/certs/jobs

chmod 644 ~/certs/testconn/*
chmod 600 ~/certs/testconn/private.pem.key

ls -l ~/certs/testconn
#total 12
#-rw-r--r-- 1 hagi hagi 1224 Mar 21 23:00 device.pem.crt
#-rw------- 1 hagi hagi 1675 Mar 21 23:00 private.pem.key
#-rw-r--r-- 1 hagi hagi  451 Mar 21 23:00 public.pem.key

Creating the AWS IoT resources

# check the endpoint
$ aws iot describe-endpoint --endpoint-type IoT:Data-ATS
#{
#    "endpointAddress": "a3phz77nfdgsr1-ats.iot.ap-northeast-1.amazonaws.com"
#}
# create a thing
$ aws iot create-thing --thing-name "DevCliTestThing"
#{
#    "thingName": "DevCliTestThing",
#    "thingArn": "arn:aws:iot:ap-northeast-1:980023311172:thing/DevCliTestThing",
#    "thingId": "c6187956-ef7a-4143-b5dc-6b518d5866df"
#}
# create a JSON file for the policy
vi ~/policies/dev_cli_test_thing_policy.json
>>>

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iot:Publish",
                "iot:Subscribe",
                "iot:Receive",
                "iot:Connect"
            ],
            "Resource": [
                "*"
            ]
        }
    ]
}
# create the AWS IoT policy
aws iot create-policy \
> --policy-name "DevCliTestThingPolicy" \
> --policy-document "file://~/policies/dev_cli_test_thing_policy.json"
#{
#    "policyName": "DevCliTestThingPolicy",
#    "policyArn": "arn:aws:iot:ap-northeast-1:980023311172:policy/DevCliTestThingPolicy",
#    "policyDocument": "{\n    \"Version\": \"2012-10-17\",\n    \"Statement\": [\n        {\n            \"Effect\": \"Allow\",\n            \"Action\": [\n                #\"iot:Publish\",\n                \"iot:Subscribe\",\n                \"iot:Receive\",\n                \"iot:Connect\"\n            ],\n            \"Resource\": [\n                #\"*\"\n            ]\n        }\n    ]\n}\n",
 #   "policyVersionId": "1"
#}
certificateArn="arn:aws:iot:ap-northeast-1:980023311172:cert/9c84a36ba2bcf5accf3ae7058032410515a237bd580b5037ced2d67857c8dfbc"
# attach the policy to the device certificate; replace certificateArn with the value saved earlier
$ aws iot attach-policy \
> --policy-name "DevCliTestThingPolicy" \
> --target $certificateArn

# attach the device certificate to the AWS IoT thing resource
$ aws iot attach-thing-principal \
> --thing-name "DevCliTestThing" \
> --principal $certificateArn

Create the config file

mkdir ~/dc-configs
chmod 745 ~/dc-configs
# create the JSON file for the config
vi ~/dc-configs/dc-testconn-config.json

{
  "endpoint": "a3phz77nfdgsr1-ats.iot.ap-northeast-1.amazonaws.com",
  "cert": "~/certs/testconn/device.pem.crt",
  "key": "~/certs/testconn/private.pem.key",
  "root-ca": "~/certs/AmazonRootCA1.pem",
  "thing-name": "DevCliTestThing",
  "logging": {
    "enable-sdk-logging": true,
    "level": "DEBUG",
    "type": "STDOUT",
    "file": ""
  },
  "jobs": {
    "enabled": false,
    "handler-directory": ""
  },
  "tunneling": {
    "enabled": false
  },
  "device-defender": {
    "enabled": false,
    "interval": 300
  },
  "fleet-provisioning": {
    "enabled": false,
    "template-name": "",
    "template-parameters": "",
    "csr-file": "",
    "device-key": ""
  },
  "samples": {
    "pub-sub": {
      "enabled": true,
      "publish-topic": "test/dc/pubtopic",
      "publish-file": "",
      "subscribe-topic": "test/dc/subtopic",
      "subscribe-file": ""
    }
  },
  "config-shadow": {
    "enabled": false
  },
  "sample-shadow": {
    "enabled": false,
    "shadow-name": "",
    "shadow-input-file": "",
    "shadow-output-file": ""
  }
}

chmod 644 ~/dc-configs/dc-testconn-config.json

Run the AWS IoT Device Client

cd ~/aws-iot-device-client/build
./aws-iot-device-client --config-file ~/dc-configs/dc-testconn-config.json

Measuring depth with a Jetson Nano and an ultrasonic sensor (HC-SR04)

Last time I measured depth with stereo vision; this time I measured it with an ultrasonic sensor (HC-SR04).

This is a personal memo, so I'll keep the summary brief.

Contents
1. Wiring diagram
2. Measuring depth with the ultrasonic sensor
3. Pros and cons of stereo vision vs. the ultrasonic sensor

1. Wiring diagram

I wired it up with these four components:

・Jetson Nano
・USB camera
・ultrasonic sensor
・breadboard

GND  => GND
Trig => pin 19
Echo => pin 21
VCC  => 5 V (power)


To keep the camera image and the sensor's detected position aligned, I placed the ultrasonic sensor and the USB camera as close together as possible.



2. Measuring depth with the ultrasonic sensor

Measured with the code below (partially abridged).

import time
import cv2  # needed by run_camera() below
import RPi.GPIO as GPIO

class SensorDist():
    def __init__(self, hyp):
        self.__TRIG = hyp['TRIG_PIN'] # 19 pin
        self.__ECHO = hyp['ECHO_PIN'] # 21 pin
        GPIO.setwarnings(False)
        GPIO.setmode(GPIO.BOARD) 
        GPIO.setup(self.__TRIG,GPIO.OUT)
        GPIO.setup(self.__ECHO,GPIO.IN)
        self.sound_speed = hyp['sound_speed'] #[cm/s]
        
    def getDistance(self):
        GPIO.output(self.__TRIG, GPIO.LOW)
        # TRIG = HIGH
        GPIO.output(self.__TRIG, True)
        # after 0.01[s], TRIG = LOW
        time.sleep(0.01)        
        GPIO.output(self.__TRIG, False)

        signaloff=0
        signalon=0
        # signal start
        while GPIO.input(self.__ECHO) == 0:
            signaloff = time.time()
        # signal returned
        while GPIO.input(self.__ECHO) == 1:
            signalon = time.time()
        # dist calculation
        return (signalon - signaloff) * int(self.sound_speed /2)

    def __del__(self):
        GPIO.cleanup()

def run_camera(opt, hyp):
    sensor = SensorDist(hyp)
    cap = cv2.VideoCapture(0)
    i = 0 
    while True:
        ret, frame = cap.read()
        i+=1
        if i%opt.per_frames==0:
            # sensor_dist() was never defined in the memo; call getDistance() directly
            distance = sensor.getDistance()
            texts = 'Distance: {:.0f} [cm]'.format(distance)
            cv2.putText(frame, texts, (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, [255, 0, 0], thickness=3)
            frame = cv2.resize(frame, (500, 500))
            cv2.imshow('camera', frame)
            #print("{:.0f}cm".format(distance))
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()
    del sensor
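
A minimal way to run the snippet above. The hyp keys follow the constructor, the pin numbers come from the wiring table, and I'm assuming opt only needs per_frames (the speed of sound, roughly 34300 cm/s in air, is my value, not the memo's):

import argparse

# hypothetical settings for run_camera()
hyp = {'TRIG_PIN': 19, 'ECHO_PIN': 21, 'sound_speed': 34300}
opt = argparse.Namespace(per_frames=10)  # measure and draw every 10th frame
run_camera(opt, hyp)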

Performance-wise, it's quite accurate.


3. Pros and cons of stereo vision vs. the ultrasonic sensor

Ultrasonic sensor

・Pros
Very accurate. Only one camera is needed, so the load on the device is small.

・Cons
Like sonar, it can only detect within a certain region (the detection area is narrow).


Stereo vision

・Pros
Wide detection area. It detects distance in sync with the camera image, so it has a wide range of applications.

・Cons
Alternatives like LIDAR and machine learning are better, so there is less and less reason to go out of your way to use it.
It uses two cameras, so the load on the device is heavy.
Accuracy is low.


The sensor is quite accurate and, depending on the use case, very handy. As for stereo vision, alternatives have multiplied to the point that it no longer feels worth using.




Reference

I tried the ultrasonic distance sensor (HC-SR04) on a Jetson Nano

Setting up a Rust development environment as a C++ alternative for machine learning

A memo on setting up a development environment for Rust, a compiled language that can stand in for C++ in machine learning.

Contents
1. Jupyter Notebook on Ubuntu
2. Trying the camera with OpenCV

1. Jupyter Notebook on Ubuntu

# install rust on ubuntu
curl https://sh.rustup.rs -sSf | sh
source $HOME/.cargo/env
cargo --version
## cargo 1.67.1

# use the nightly toolchain
$ rustup install nightly
$ rustup default nightly

# install Evcxr Jupyter (build it first, then register the kernel)
rustup component add rust-src
sudo apt install jupyter-notebook cmake build-essential
cargo install evcxr_jupyter
evcxr_jupyter --install

# launch
jupyter notebook


It worked fine.


use std::fmt::Debug;
pub struct Matrix<T> {pub values: Vec<T>, pub row_size: usize}
impl<T: Debug> Matrix<T> {
    pub fn evcxr_display(&self) {
        let mut html = String::new();
        html.push_str("<table>");
        for r in 0..(self.values.len() / self.row_size) {
            html.push_str("<tr>");
            for c in 0..self.row_size {
                html.push_str("<td>");
                html.push_str(&format!("{:?}", self.values[r * self.row_size + c]));
                html.push_str("</td>");
            }
            html.push_str("</tr>");
        }
        html.push_str("</table>");
        println!("EVCXR_BEGIN_CONTENT text/html\n{}\nEVCXR_END_CONTENT", html);
    }
}
let m = Matrix {values: vec![1,2,3,4,5,6,7,8,9], row_size: 3};
m  // evaluating m as the last expression makes evcxr render the HTML table

2. Trying the camera with OpenCV

Let's try a Rust task that uses OpenCV.

# create a rust project
# as a binary crate
cargo new --bin rust-opencv
cd rust-opencv

Check the opencv crate version: go to crates.io and search for opencv.


Check the latest version and add it to Cargo.toml.

Cargo.toml

[package]
name = "rust-opencv"
version = "0.1.0"
edition = "2021"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
opencv = "0.77.0"

main.rs

use opencv::{highgui, prelude::*, videoio, Result};
fn main() -> Result<()> {
	let window = "video capture";
	highgui::named_window(window, highgui::WINDOW_AUTOSIZE)?;
	let mut cam = videoio::VideoCapture::new(0, videoio::CAP_ANY)?; // 0 is the default camera
	let opened = videoio::VideoCapture::is_opened(&cam)?;
	if !opened {
		panic!("Unable to open default camera!");
	}
	loop {
		let mut frame = Mat::default();
		cam.read(&mut frame)?;
		if frame.size()?.width > 0 {
			highgui::imshow(window, &frame)?;
		}
		let key = highgui::wait_key(10)?;
		if key > 0 && key != 255 {
			break;
		}
	}
	Ok(())
}
## compile & run
$ cargo run
#Compiling rust-opencv v0.1.0 (/home/parallels/rust-opencv)
#    Finished dev [unoptimized + debuginfo] target(s) in 4.24s
#     Running `target/debug/rust-opencv`
#[ WARN:0] global ./modules/videoio/src/cap_gstreamer.cpp (1100) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1


It worked.


References

opencv-rust
evcxr
Running Rust in Jupyter

Memo on setting the laser emission angle for a laser system

I'm building a laser system that uses software and hardware to automatically track a target captured by the camera.

On the hardware side, I came up with two ways to set the laser emission angle, keeping these two points in mind:

・irradiate the target as accurately as possible
・keep the setup cost low

Explanations would get convoluted, so this is just a personal memo that roughly lays out the two methods with figures and code.

Contents
Common parameters
Method 1: a 4-step setup using disparity
Method 2: a 3-step setup that matches the laser emission angle to the camera's pixel range (no disparity)


Common parameters

・image size: height = 720 [pixel], width = 1080 [pixel]
・laser irradiation range (object movement area): 120 [cm]
・distance to the object (distance): 130 [cm]
・min and max of the laser irradiation angle (θ): 0-180
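
For reference, the same parameters as a Python dict (the key names are mine; the snippets below hard-code these values instead of reading them from here):

# shared parameters for both methods
COMMON = {
    'height_px': 720,       # image height [pixel]
    'width_px': 1080,       # image width [pixel]
    'laser_range_cm': 120,  # laser irradiation range (object movement area) [cm]
    'distance_cm': 130,     # distance to the object [cm]
    'theta_min_deg': 0,     # min laser irradiation angle
    'theta_max_deg': 180,   # max laser irradiation angle
}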


Method 1: a 4-step setup using disparity

Steps
1. Align the camera and the laser as closely as possible
2. Measure the laser irradiation range (cm) and its size in pixels
3. Measure the distance to the object (dist)
4. Compute the irradiation angle (ε) from the formula


Get the distance to the object (z) from the disparity, then get the laser angle from the coordinates.

1. Align the camera and the laser as closely as possible

Bring the camera and the laser beam as close together as possible so they are roughly parallel.

2. Measure the laser irradiation range (cm) and its size in pixels

Measure the real length of the area you want the laser to cover, and the number of pixels it spans. Centimeters are the most common and convenient unit.



3. Measure the distance to the object (dist)

Once you have the disparity, compute the distance to the object from the formula that uses the camera's image element size and focal length.


def distance_formula(disparity):
    # z = T*f / (image element size * disparity)
    T = 2.6                # baseline [cm]
    f = 0.315              # focal length [cm]
    img_element = 0.00028  # image element (pixel) size [cm]
    K = int(T*f/img_element)  # 2925
    return K/disparity

4. Compute the irradiation angle (ε) from the formula
Convert the object's movement distance (pixels) to cm, then get the angle from the distance to the object (z) using the arctangent.

Here the movement distance is 5 [pixels].

def pixel2cm(pix):
    object_size = 120  # irradiation range [cm]
    pixel = 900        # the same range in [pixel]
    cm = pix * (object_size/pixel)
    return cm * 100


def calculate_Xangle(pix, dist):
    x = pixel2cm(pix)
    theta = np.arctan(x/dist)  # [rad]
    return theta


dist = distance_formula(disparity)  # disparity comes from stereo matching
# dist = 130
theta = calculate_Xangle(pix=5, dist=130)
# theta (angle) is 0.47

Full code

import numpy as np

def pixel2cm(pix):
    object_size = 120  # irradiation range [cm]
    pixel = 900        # the same range in [pixel]
    cm = pix * (object_size/pixel)
    return cm * 100

# 3
def distance_formula(disparity):
    # z = T*f / (image element size * disparity)
    T = 2.6                # baseline [cm]
    f = 0.315              # focal length [cm]
    img_element = 0.00028  # image element (pixel) size [cm]
    K = int(T*f/img_element)  # 2925
    return K/disparity

# 4
def calculate_Xangle(pix, dist):
    x = pixel2cm(pix)
    theta = np.arctan(x/dist)  # [rad]
    return theta


dist = distance_formula(disparity)  # disparity comes from stereo matching
# dist = 130
theta = calculate_Xangle(pix=5, dist=130)
# theta (angle) is 0.47
# this is the angle for the laser


Method 2: a 3-step setup that matches the laser emission angle to the camera's pixel range (no disparity)

Notes
・no need to align the camera and the laser
・no need to measure the length of the irradiation range

Steps
1. Set the laser irradiation angle (θ) for the irradiation range
2. Match the pixel range in the image (ε) to the laser irradiation angle
3. Get the laser irradiation angle within the range (ε = θ) from the object's position and the formula


1. Set the laser irradiation angle (θ) for the irradiation range

Actually move the laser (with something like pan_tilt.py) and find the angles needed to cover the irradiation range.

2. Match the pixel range in the image (ε) to the laser irradiation angle
Match the pixel range in the image to the angles set in step 1.

By the way, the formula for converting pixels to an angle is the one below.

from enum import Enum

class LaserAngleRange(Enum):
    XMin = 80
    XMax = 130
    YMin = 70
    YMax = 120

class AngleFormula(Enum):
    ServoMax = 180
    WidthMax = 1080
    HightMax = 720

def Xangle2duty(Xcoordinate):
    # map an x pixel coordinate to a servo angle (1080 px <-> 180 deg)
    deg = Xcoordinate / (AngleFormula.WidthMax.value / AngleFormula.ServoMax.value)
    return deg


3. Get the laser irradiation angle within the range (ε = θ) from the object's position and the formula

Compute the in-image angle within the laser's configured angle range, so that the laser's set angle and the in-image angle come out the same.

def restricted_formula(degX):
    # accept the angle only if it falls inside the laser's X range
    if LaserAngleRange.XMin.value < degX < LaserAngleRange.XMax.value:
        Xdeg = degX
    else:
        Xdeg = None  # outside the irradiation range
    return Xdeg

degX = Xangle2duty(Xcoordinate=700)
Xdeg = restricted_formula(degX)
# Xdeg is 116.66°

Full code

from enum import Enum

class LaserAngleRange(Enum):
    XMin = 80
    XMax = 130
    YMin = 70
    YMax = 120

class AngleFormula(Enum):
    ServoMax = 180
    WidthMax = 1080
    HightMax = 720

def Xangle2duty(Xcoordinate):
    # map an x pixel coordinate to a servo angle (1080 px <-> 180 deg)
    deg = Xcoordinate / (AngleFormula.WidthMax.value / AngleFormula.ServoMax.value)
    return deg

def restricted_formula(degX):
    # accept the angle only if it falls inside the laser's X range
    if LaserAngleRange.XMin.value < degX < LaserAngleRange.XMax.value:
        Xdeg = degX
    else:
        Xdeg = None  # outside the irradiation range
    return Xdeg

degX = Xangle2duty(Xcoordinate=700)
Xdeg = restricted_formula(degX)
# Xdeg is 116.66°. This is the laser angle.

The calculations check out, and there are probably better ways to compute this, but this is fine as a first pass.