🙅‍🌶🐔

The moment you position yourself, you become exposed, and if you fall into that, you are in trouble.


Welcome to GitHub Pages

You can use the editor on GitHub to maintain and preview the content for your website in Markdown files.

Whenever you commit to this repository, GitHub Pages will run Jekyll to rebuild the pages in your site, from the content in your Markdown files.

Markdown

Markdown is a lightweight and easy-to-use syntax for styling your writing. It includes conventions for:

Syntax highlighted code block

# Header 1
## Header 2
### Header 3

- Bulleted
- List

1. Numbered
2. List

**Bold** and _Italic_ and `Code` text

[Link](url) and ![Image](src)

For more details see GitHub Flavored Markdown.

Jekyll Themes

Your Pages site will use the layout and styles from the Jekyll theme you have selected in your repository settings. The name of this theme is saved in the Jekyll _config.yml configuration file.
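For example, you can check it from the shell; the theme name below is just a placeholder for whichever one you picked:

grep theme _config.yml   # prints something like: theme: jekyll-theme-cayman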

Support or Contact

Having trouble with Pages? Check out our documentation or contact support and we’ll help you sort it out.




Recent Posts

11 May 2018

A First Look at GANs

1.

Generative adversarial networks (GANs) are an emerging technique for semi-supervised and unsupervised learning, which they achieve by implicitly modelling high-dimensional data distributions. Proposed in 2014, their hallmark is the training of a pair of competing networks. An analogy that suits visual data is to think of one network as an art forger and the other as an art expert. The forger, known in the literature as the generator G, produces forgeries with the goal of making realistic images. The expert, known as the discriminator D, receives both forgeries and real images and aims to tell them apart. The two are trained simultaneously, in competition with each other.

Crucially, the generator has no direct access to real images; the only way it learns is through its interaction with the discriminator. The discriminator, by contrast, sees both synthetic samples and samples drawn from the stack of real images. The error signal fed to the discriminator comes from the simple ground truth of whether an image came from the real stack or from the generator. Through the discriminator, the same error signal can be used to train the generator, steering it towards producing better forgeries.

The networks representing the generator and the discriminator are typically implemented as multi-layer networks consisting of convolutional and/or fully connected layers. Both networks must be differentiable, though they need not be directly invertible.

The generator network can be viewed as a mapping from a representation space, called the latent space, into the data space. In the basic GAN, the discriminator network can similarly be characterised as a function mapping from image data to a probability: the probability that the image came from the real data distribution rather than from the generator's distribution. With the generator fixed, the discriminator can be trained to classify images as coming from the training data (real, close to 1) or from the fixed generator (fake, close to 0). Once the discriminator is optimal, it is frozen, and the generator can go on being trained to lower the discriminator's accuracy. If the generator's distribution matches the real data distribution perfectly, the discriminator will be maximally confused, predicting 0.5 for every input.
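All of this is captured by the minimax objective from the original 2014 paper, reproduced here for reference, where $p_{\text{data}}$ is the real data distribution and $p_z$ is the prior over the latent space:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$$

For a fixed generator with distribution $p_g$, the optimal discriminator is $D^*(x) = p_{\text{data}}(x) / (p_{\text{data}}(x) + p_g(x))$, which collapses to $0.5$ everywhere once $p_g = p_{\text{data}}$: exactly the maximally confused state described above.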



09 May 2018

Errors I Ran Into While Running tf-faster-rcnn

No awkward English this time. Let me go over the errors I hit while running endernewton's code. Solutions to most of them can be found in his issues, but nobody has collected them in one place, and there is nothing in Chinese either, so here I am forcing what I write to have some meaning.

Tips

This program automatically runs the test/validation script after training. Re-running the training script does no harm, because there are checkpoints, saved every 5,000 iterations (70,000 iterations in total). So running the training script after training has finished amounts to going straight to testing.
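For reference, the scripts are invoked as in the repo's README; the GPU id, dataset, and network below are placeholders for your own setup:

./experiments/scripts/train_faster_rcnn.sh 0 pascal_voc vgg16   # train (then it auto-runs the test)
./experiments/scripts/test_faster_rcnn.sh 0 pascal_voc vgg16    # test only, same argument order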

1. A bash variable assignment and the time command on the same line

./experiments/scripts/train_faster_rcnn.sh: line 73: time: command not found

Solution

Edit lines 62 and 72 of ./experiments/scripts/train_faster_rcnn.sh (add && before time), so they become:

CUDA_VISIBLE_DEVICES=${GPU_ID} && time python ./tools/trainval_net.py \

P.S. The same problem exists on lines 58 and 67 of test_faster_rcnn.sh, and the fix is the same. Otherwise, you will run into the same error when testing.

2. Forgot to install the Python COCO API

ImportError: No module named 'pycocotools'

In short, it is not the case that you can skip installing it just because you aren't using the COCO data; just remember to install it.

Solution

cd data
git clone https://github.com/pdollar/coco.git
cd coco/PythonAPI
make
cd ../../..

3. GPU memory exhausted

ResourceExhaustedError (see above for traceback): OOM when allocating tensor with shape[512,512,3,3]

This happened during testing. Generally speaking, if you chose the matching architecture when editing the setup script (I used a GTX 1080 (Ti), which corresponds to sm_61), this shouldn't happen at all. One guy with a 1070 hit it during training and filed issue#286; the author simply suggested switching to a better card 😜, since GPU memory usage varies with the size of the images.

As for why I hit this during testing, there are two possible causes. One: I was renting a shared GPU cloud instance for 3 yuan an hour, so was the memory being used by someone else? I think this is unlikely, because the provider says later tasks evict earlier ones. Two: here is where it gets interesting. The author of issue#7 (a young lady, I gather) also hit an out-of-memory allocation error during testing, but said she wasn't using the GPU for anything else. Interestingly, she closed the issue the same day and referenced it under issue#5816 in the official TensorFlow repo. She believed it was caused by leaving a Jupyter notebook open. In fact it isn't; I had one open the whole time and still succeeded in the end. Thanks to her for leading me to issue#5816. Damn, here is the official reply:

It’s possible to have non-deterministic “out of memory” because you are unlucky. TensorFlow has non-deterministic order of execution, so depending on timing, you may have things scheduling in different order and needing different amounts of memory.

What does that mean? TensorFlow's order of execution is non-deterministic. This is a bug, which means that if you hit "out of memory", you are simply unlucky and should go ward off the bad luck: go find a couple of lucky koi to retweet. Of course, when writing our own code we can use tensorflow.contrib.graph_editor to avoid this non-determinism.

Solution

Run it a few more times; one of them will eventually succeed. I hit this twice on my very first test 😓, and once more later when re-running after other errors.
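If you'd rather not babysit it, a crude retry loop is one option. A minimal sketch, assuming the usual test invocation shown above (the arguments are placeholders):

until ./experiments/scripts/test_faster_rcnn.sh 0 pascal_voc vgg16; do
  echo "failed (possibly OOM), retrying..." >&2   # repeat until the script exits 0
  sleep 10
done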

4. File not found when writing per-class test results

FileNotFoundError: [Errno 2] No such file or directory: '/root/tf-faster-rcnn/data/VOCdevkit2007/results/VOC2007/Main/

If the directory can't be found, just create one.

Solution

See issue#246:

mkdir -p /root/tf-faster-rcnn/data/VOCdevkit2007/results/VOC2007/Main/

One more thing: remember that /root corresponds to wherever you put the code, and /tf-faster-rcnn assumes you didn't rename the repo.

5. A type error in the code

TypeError: write() argument must be str, not bytes

Solution

Per issue#261, edit line 121 of tf-faster-rcnn/lib/datasets/voc_eval.py, changing 'b' to 'wb':

with open(cachefile, 'wb') as f:

As issue#261 mentions, there is another place that needs changing, or you will run into the problem from issue#171. I don't know whether you'll take the advice; I did, and changed it as follows:

Change line 105 of the same file to:

cachefile = os.path.join(cachedir, '%s_annots.pkl' % imagesetfile.split("/")[-1].split(".")[0])

With that, every problem I've run into so far is solved.




08 May 2018

Some Tricks About Sh

Some Tricks About Bash/Zsh/Fish…

Yesterday, I learned some tricks about *sh while trying to run my program. The tricks I used are as follows:

1. wc, which means Word Count

wc -l gives the number of lines on its standard input:

some_command_that_can_be_output_by_line | wc -l 

For example:

  • Input: ls -l | wc -l
  • Output: 5, roughly the number of files in the current directory (strictly speaking, ls -l also prints a leading total line, so the count is the number of entries plus one)

P.S. When using upper case... em... I mean wc -L, with which you get the length of the longest line. It's true, I've tried. 😂
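A quick way to convince yourself with a throwaway line of input (GNU wc; the printf data is made up):

printf 'ab\nabcd\n' | wc -L   # prints 4, the length of the longest line, "abcd"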

2.

tail -n 14 nohup.out | grep -A 7 "iter:"

Looks a bit complicated. Don't worry, I'll explain it piece by piece:

tail FILE

You can use man tail to see:

print the last 10 lines of each FILE to standard output.

Ugh... it seems I didn't explain anything beyond copying the manual. Never mind, it's just for fun. Enough useless talk; let's continue.

tail -n NUM FILE

output the last NUM lines.

Of course, you can also use it like this: ls -l | tail -n 1, which prints the last line of the output of ls -l.

P.S. The usage of head is the same, except that it outputs the first few lines of the file. For instance:

  • Input: ls -l | tail -n 1
  • Output: drwxr-xr-x 7 tipsy 224 5 7 14:43 LabelVOC

grep -A NUM PATTERN

A means After Context, so you can also use --after-context=NUM instead.

Print NUM lines of trailing context after matching lines.

grep -B NUM PATTERN

B means Before Context; you can use --before-context=NUM instead. Yep, similar to the above.

Print NUM lines of leading context before matching lines.

grep -C NUM PATTERN

C means Context. I don't wanna say more; it just makes the two options above work together.

Print NUM lines of output context.
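Here is a tiny self-contained demo of all three options; the four input lines are made up purely for illustration:

printf 'a\nmatch\nb\nc\n' | grep -A 1 'match'   # match plus 1 line after:  match, b
printf 'a\nmatch\nb\nc\n' | grep -B 1 'match'   # match plus 1 line before: a, match
printf 'a\nmatch\nb\nc\n' | grep -C 1 'match'   # context on both sides:    a, match, b

So the original pipeline takes the last 14 lines of nohup.out and, for each line containing "iter:", keeps that line plus the 7 lines following it.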

3.

nohup unzip /dir/zipfile.zip -o -d /dir_you_want > unzip.info.txt 2>&1 &

Hah, more complicated than the last one.

nohup

A coreutils program whose name means "no hangup": it sets the SIGHUP signal to be ignored, so the program will not be terminated when you log out of SSH. In short:

Run COMMAND, ignoring hangup signals.

If standard output is a terminal, append output to ‘nohup.out’ if possible, ‘$HOME/nohup.out’ otherwise. If standard error is a terminal, redirect it to standard output.

unzip /dir/zipfile.zip -o

It extracts ZIP archives. -o means overwrite existing files without prompting. -d /dir_you_want is an optional directory to extract files into; by default, files are extracted into the current directory.

> redirected_standard_output_file

It redirects standard output to this file, overwriting the file.

P.S. >> does not overwrite; instead it appends the redirected output at the end of the file.
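A minimal sketch with a throwaway file name:

echo one > demo.txt     # demo.txt now contains: one
echo two > demo.txt     # > overwrites, so demo.txt now contains only: two
echo three >> demo.txt  # >> appends, so demo.txt now contains: two, three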

2>&1

Seems mysterious, but it's not at all. 2 and 1 are just two file descriptors. A file descriptor is a non-negative integer: 2 represents standard error, 1 represents standard output, and the remaining one, 0, represents standard input. The & tells the shell that 1 means standard output rather than a file named '1'.

Writing just 2>1 would redirect the standard error to a file called “1”, not to standard output.[2]

P.S. 2>&1 boils down to a dup2(1, 2) call; I couldn't tell you much more here, so search for it if you're interested.
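You can watch the redirection happen with a path that (presumably) doesn't exist:

ls /no/such/dir > out.txt 2>&1   # ls fails and writes its complaint to fd 2
cat out.txt                      # ...which ended up in the file, thanks to 2>&1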

&

The last symbol, which starts the program as a background job.

P.S. fg means foreground; it brings a background job back to the foreground (but if you've redirected output, you won't see much).[3]
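Putting the whole line together, a hypothetical session could look like this (the paths are placeholders, as above):

nohup unzip -o /dir/zipfile.zip -d /dir_you_want > unzip.info.txt 2>&1 &
jobs    # lists background jobs started from this shell
fg %1   # brings job 1 back to the foreground; Ctrl+Z suspends it, bg resumes it in the background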

TO BE CONTINUED

These days I've been exhausted. Good night.

References:

  1. IBM developerWorks

  2. sergut on unix.stackexchange.com

  3. cas on serverfault.com




06 May 2018

🌶️🐔

2018-05-06 10 AM

The blog is finally up.

2018-05-10 9 PM

Testing video embedding:




05 May 2018

test

Text can be bold, italic, or strikethrough.

Link to another page.

There should be whitespace between paragraphs.

There should be whitespace between paragraphs. We recommend including a README, or a file with information about your project.

Header 1

This is a normal paragraph following a header. GitHub is a code hosting platform for version control and collaboration. It lets you and others work together on projects from anywhere.

Header 2

This is a blockquote following a header.

When something is important enough, you do it even if the odds are not in your favor.

Header 3

// Javascript code with syntax highlighting.
var fun = function lang(l) {
  dateformat.i18n = require('./lang/' + l);
  return true;
}

# Ruby code with syntax highlighting
GitHubPages::Dependencies.gems.each do |gem, version|
  s.add_dependency(gem, "= #{version}")
end

Header 4

  • This is an unordered list following a header.
  • This is an unordered list following a header.
  • This is an unordered list following a header.

Header 5

  1. This is an ordered list following a header.
  2. This is an ordered list following a header.
  3. This is an ordered list following a header.

Header 6

head1 | head two | three
--- | --- | ---
ok | good swedish fish | nice
out of stock | good and plenty | nice
ok | good oreos | hmm
ok | good zoute drop | yumm

There’s a horizontal rule below this.

***
Here is an unordered list:

  • Item foo
  • Item bar
  • Item baz
  • Item zip

And an ordered list:

  1. Item one
  2. Item two
  3. Item three
  4. Item four

And a nested list:

  • level 1 item
    • level 2 item
    • level 2 item
      • level 3 item
      • level 3 item
  • level 1 item
    • level 2 item
    • level 2 item
    • level 2 item
  • level 1 item
    • level 2 item
    • level 2 item
  • level 1 item

Small image

[Image: Octocat]

Large image

[Image: Branching]

Definition lists can be used with HTML syntax.

<dl>
  <dt>Name</dt>
  <dd>Godzilla</dd>
  <dt>Born</dt>
  <dd>1952</dd>
  <dt>Birthplace</dt>
  <dd>Japan</dd>
  <dt>Color</dt>
  <dd>Green</dd>
</dl>
Long, single-line code blocks should not wrap. They should horizontally scroll if they are too long. This line should be long enough to demonstrate this.
The final element.



