This article is about 3,654 characters long; estimated reading time: 12 minutes.
1. [Code] Sugar Tensor
Overview:
Sugar Tensor aims to help deep learning researchers and practitioners. It adds syntactic-sugar functions to TensorFlow to avoid tedious, repetitive tasks, and was developed under a set of guiding principles described in the original post.
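To give a flavor of what "syntactic sugar" means here, below is a minimal, self-contained sketch of the method-chaining style such wrappers use. This is a toy example, not the actual Sugar Tensor API; the class and method names are invented for illustration.

```python
# Toy illustration of method-chaining "syntactic sugar".
# NOT the real Sugar Tensor API -- just the general pattern such
# libraries use to replace nested calls with fluent chains.

class SugarBox:
    """Wraps a list of numbers so transformations can be chained fluently."""

    def __init__(self, value):
        self.value = value

    def sg_scale(self, factor):
        # Multiply every element; return a new wrapper so calls chain.
        return SugarBox([v * factor for v in self.value])

    def sg_shift(self, offset):
        # Add an offset to every element; also chainable.
        return SugarBox([v + offset for v in self.value])


x = SugarBox([1, 2, 3])
y = x.sg_scale(2).sg_shift(1)  # chained calls replace nested function calls
print(y.value)  # → [3, 5, 7]
```

The point of the pattern is readability: `x.sg_scale(2).sg_shift(1)` reads left to right in the order the operations happen, instead of inside-out as `shift(scale(x, 2), 1)` would.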
Original link:
2. [Code] illustration2vec
Overview:
illustration2vec (i2v) is a simple library for estimating a set of tags and extracting semantic feature vectors from given illustrations. For details, please see the references in the original post.
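A "semantic feature vector" is useful because it lets you compare illustrations numerically, e.g. by cosine similarity. Below is a minimal sketch of such a comparison using tiny made-up vectors standing in for i2v's real high-dimensional outputs; nothing here is the actual i2v API.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "feature vectors" standing in for the
# high-dimensional vectors a model like i2v would extract.
vec_a = [0.9, 0.1, 0.0, 0.3]
vec_b = [0.8, 0.2, 0.1, 0.4]

# Similar illustrations should yield vectors with similarity near 1.0.
print(round(cosine_similarity(vec_a, vec_b), 3))
```

In practice one extracts such a vector per illustration and ranks candidates by similarity to find visually or semantically related images.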
Original link:
3. [Resource] 30 Top Videos, Tutorials & Courses on Machine Learning & Artificial Intelligence from 2016
Overview:
2016 has been the year of "Machine Learning and Deep Learning". We have seen the likes of Google, Facebook, Amazon and many more come out in the open and acknowledge the impact machine learning and deep learning have had on their business.
Last week, I published an article listing top videos, and I was blown away by the response. I could understand it to some degree, since I found those videos extremely helpful. So, I decided to do a similar article on top machine learning videos from 2016.
Original link:
4. [Blog] 20+ hottest research papers on Computer Vision, Machine Learning
Overview:
Computer Vision used to be cleanly separated into two schools: geometry and recognition. Geometric methods such as structure from motion and optical flow typically focus on measuring objective real-world quantities, like 3D distances, directly from images, while recognition techniques such as support vector machines and probabilistic graphical models traditionally focus on perceiving high-level semantic information (e.g., is this a dog or a table?) directly from images.
Original link:
5. [Course] Neural Networks and Deep Learning
Overview:
Neural networks have enjoyed several waves of popularity over the past half century. Each time they become popular, they promise to provide a general-purpose artificial intelligence: a computer that can learn to do any task that you could program it to do. The first wave of popularity, in the late 1950s, was crushed by theoreticians who proved serious limitations to the techniques of the time. These limitations were overcome by advances that allowed neural networks to discover distributed representations, leading to another wave of enthusiasm in the late 1980s. The second wave died out as more elegant, mathematically principled algorithms were developed (e.g., support vector machines, Bayesian models). Around 2010, neural nets had a third resurgence.

What happened over the past 20 years? Basically, computers got much faster and data sets got much larger, and the algorithms from the 1980s, with a few critical tweaks and improvements, appear to once again be state of the art, consistently winning competitions in computer vision, speech recognition, and natural language processing.

Below is a comic strip circa 1990, when neural nets reached public awareness. You might expect to see the same comic today, touting neural nets as the hot new thing, except that now the field has been rechristened deep learning to emphasize the architecture of neural nets that leads to the discovery of task-relevant representations.
Original link:
Reposted from: http://gpdqb.baihongyu.com/