Call for contributors (paper summaries, dataset generation, algorithm implementations, and any other useful resources)
A curated list of promising OCR resources
The third-party and Alibaba-provided APIs mainly cover ID cards, bank cards, driver's licenses, passports, e-commerce product review text, license plates, business cards, Tieba forum text, and text in video. Most of them return the recognized characters together with their coordinates, and the card-type APIs can return structured fields. Pricing is around 0.01 yuan per call.
OcrKing originated in early 2009 as Aven's personal project for data mining. Driven by a passion for the technology and nearly seven years of accumulation and iteration, it has evolved into a cloud-based OCR system that combines multi-layer neural networks with deep learning. In early 2010, a web version was built to make it easier for more users to try. From the beginning, OcrKing has provided a free recognition service and developer APIs, and it will continue to offer free cloud OCR. OcrKing has never been promoted,
but it has quietly been there all along, because its author believes that anyone who needs it will find it. You are welcome to introduce OcrKing online recognition to friends with similar needs. We hope this tool is useful to you, and thank you all for your support!
OcrKing is a free, fast, and easy-to-use online cloud OCR platform that recognizes the content of PDFs and images and produces an editable document. It supports multiple input and output file formats, multiple languages (Simplified Chinese, Traditional Chinese, English, Japanese, Korean, German, French, etc.), multiple recognition modes, multiple platforms, and several forms of API access.
Connectionist Temporal Classification is a loss function useful for performing supervised learning on sequence data, without needing an alignment between input data and labels. For example, CTC can be used to train end-to-end systems for speech recognition, which is how we have been using it at Baidu's Silicon Valley AI Lab.
Warp-CTC is a library that implements the CTC loss efficiently and in parallel on both CPUs and GPUs, for end-to-end systems such as speech recognition.
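For orientation, here is a minimal sketch of how a CTC loss is typically wired up, using PyTorch's built-in nn.CTCLoss rather than warp-ctc itself; the tensor shapes and alphabet size are illustrative assumptions:

```python
# Minimal sketch of a CTC training step using PyTorch's nn.CTCLoss
# (not warp-ctc itself); shapes and alphabet size are illustrative.
import torch
import torch.nn as nn

T, N, C = 50, 4, 28                                   # time steps, batch size, classes (incl. blank)
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)  # stand-in for network output

targets = torch.randint(1, C, (N, 10), dtype=torch.long)   # label sequences; index 0 is the blank
input_lengths = torch.full((N,), T, dtype=torch.long)      # length of each output sequence
target_lengths = torch.full((N,), 10, dtype=torch.long)    # length of each label sequence

ctc_loss = nn.CTCLoss(blank=0)
loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients flow back to whatever produced log_probs, no alignment needed
```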
Test mxnet with your own trained model: use a trained network to recognize digits, a small set of Chinese characters, and special characters such as "./" (210 classes in total).
An expandable and scalable OCR pipeline
OpenOCR makes it simple to host your own OCR REST API.
OCRmyPDF uses Tesseract for OCR, and relies on its language packs.
OwncloudOCR uses Tesseract OCR and OCRmyPDF for reading text from images and from images embedded in PDF files.
Nextcloud OCR (optical character recognition) processing for images and PDFs with tesseract-ocr, OCRmyPDF, and PHP-native message queueing for asynchronous processing. http://janis91.github.io/ocr/
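For context, the Tesseract engine these projects wrap can also be driven directly from Python; a minimal sketch using the pytesseract wrapper, with a placeholder file name:

```python
# Minimal sketch: driving the Tesseract engine from Python via pytesseract.
# Requires the tesseract binary and the relevant language packs to be installed.
from PIL import Image
import pytesseract

image = Image.open("scan.png")                          # placeholder input image
text = pytesseract.image_to_string(image, lang="eng")   # lang must match an installed pack
print(text)
```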
Multi-label classification: end-to-end Chinese license plate recognition based on mxnet.
SwiftOCR: a fast and simple OCR library written in Swift.
Attention-OCR: visual attention-based OCR.
Added support for CTC in both Theano and TensorFlow, along with an image OCR example. #3436
Deep Embedded Clustering for OCR based on caffe
Deep Embedded Clustering for OCR based on MXNet
The minimal OCR server in Go, with a tiny sample application of gosseract.
A comparison of different variants of the gradient descent algorithm. The script implements several optimizers and visualizes their performance on the MNIST hand-written digit recognition dataset (the update rules are sketched below).
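The update rules such a comparison typically covers look roughly like this; a NumPy sketch under assumed hyperparameters, not the script's actual code:

```python
# Sketch of the update rules for a few gradient descent variants (NumPy).
# g is the gradient of the loss at w; hyperparameters are illustrative.
import numpy as np

lr, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8

def sgd(w, g):
    return w - lr * g

def momentum(w, g, v):
    v = 0.9 * v + lr * g                       # exponentially decaying velocity
    return w - v, v

def adam(w, g, m, v, t):
    m = beta1 * m + (1 - beta1) * g            # first moment estimate
    v = beta2 * v + (1 - beta2) * g ** 2       # second moment estimate
    m_hat = m / (1 - beta1 ** t)               # bias correction
    v_hat = v / (1 - beta2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```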
A curated list of resources dedicated to scene text localization and recognition
Convolutional Recurrent Neural Network (CRNN) for image-based sequence recognition.
Implementation of the method proposed in the papers " TextProposals: a Text-specific Selective Search Algorithm for Word Spotting in the Wild" and "Object Proposals for Text Extraction in the Wild" (Gomez & Karatzas), 2016 and 2015 respectively.
Word Spotting and Recognition with Embedded Attributes http://www.cvc.uab.es/~almazan/index/projects/words-att/index.html
Part of eMOP: the Franken+ tool for creating font training data for the Tesseract OCR engine from page images.
NOCR is an open-source C++ software package for text recognition in natural scenes, based on OpenCV. The package consists of a library, a console program, and a GUI program for text recognition.
An OpenCV-based OCR system that serves as a basis for other projects requiring OCR functionality. It uses Histogram of Oriented Gradients (HOG) features to describe characters and a Support Vector Machine as the classifier (see the sketch below).
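A hedged sketch of that HOG-plus-SVM recipe with OpenCV; the window geometry and training data below are placeholder assumptions, not the project's configuration:

```python
# Sketch of a HOG + SVM character classifier with OpenCV.
# Window/block/cell sizes and the training data are illustrative assumptions.
import cv2
import numpy as np

hog = cv2.HOGDescriptor((20, 20), (10, 10), (5, 5), (5, 5), 9)

def features(char_img):
    """HOG feature vector for a 20x20 grayscale character crop."""
    return hog.compute(char_img).flatten()

# Placeholder training set: random 20x20 crops with two made-up class labels.
train_imgs = [np.random.randint(0, 256, (20, 20), dtype=np.uint8) for _ in range(20)]
train_labels = np.array([i % 2 for i in range(20)], dtype=np.int32)
samples = np.array([features(im) for im in train_imgs], dtype=np.float32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(samples, cv2.ml.ROW_SAMPLE, train_labels)

_, prediction = svm.predict(samples[:1])       # predicted character class for one crop
```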
Recognize bib numbers from racing photos
Automatic License Plate Recognition library http://www.openalpr.com
Image Recognition for the Democracy Project with codes
Tools to be evaluated prior to integration into Newman
Text Recognition in Natural Images in Python
A generative vision model that trains with high data efficiency and breaks text-based CAPTCHAs
STN-OCR: A single Neural Network for Text Detection and Text Recognition
Digit Segmentation and Recognition using OpenCV and MLP test
ctpn based on tensorflow
ctpn based on caffe
A Python/OpenCV-based scene detection program, using threshold/content analysis on a given video. http://pyscenedetect.readthedocs.org
Implementation of the SegLink algorithm from the paper "Detecting Oriented Text in Natural Images by Linking Segments".
Arbitrary-Oriented Scene Text Detection via Rotation Proposals
Scene text detection in arbitrary orientations via rotated region proposals; an implementation of "Arbitrary-Oriented Scene Text Detection via Rotation Proposals".
Seven Segment Optical Character Recognition
SVHN yolo-v2 digit detector
Reads scene text in tilted orientations.
OCR with CNN+LSTM (CTPN/CRNN) for image text detection.
A stand-alone character recognition micro-service with a RESTful API.
Single Shot Text Detector with Regional Attention
gocr is a Go-based OCR module.
GOCR is an optical character recognition program, released under the GNU General Public License.
UFOCR (User-Friendly OCR), a fork of YAGF: https://github.com/andrei-b/YAGF. Supported input formats: PDF, TIFF, JPEG, PNG, BMP, PBM, PGM, PPM, XBM, XPM.
Building on recent advances in image caption generation and optical character recognition (OCR), we present a general-purpose, deep learning-based system to decompile an image into presentational markup. While this task is a well-studied problem in OCR, our method takes an inherently different, data-driven approach. Our model does not require any knowledge of the underlying markup language, and is simply trained end-to-end on real-world example data. The model employs a convolutional network for text and layout recognition in tandem with an attention-based neural machine translation system. To train and evaluate the model, we introduce a new dataset of real-world rendered mathematical expressions paired with LaTeX markup, as well as a synthetic dataset of web pages paired with HTML snippets. Experimental results show that the system is surprisingly effective at generating accurate markup for both datasets. While a standard domain-specific LaTeX OCR system achieves around 25% accuracy, our model reproduces the exact rendered image on 75% of examples.
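A rough sketch of the encoder-decoder-with-attention shape described above, as a generic PyTorch stand-in; the layer sizes, vocabulary and attention form are assumptions, not the paper's architecture:

```python
# Rough sketch of a CNN encoder + attention-based RNN decoder for image-to-markup.
# Layer sizes, vocabulary and attention form are generic assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Im2Markup(nn.Module):
    def __init__(self, vocab_size, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                      # extracts a grid of visual features
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, hidden, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRUCell(hidden * 2, hidden)      # decoder step: [prev token; context]
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, images, tokens):
        feats = self.cnn(images)                       # (B, hidden, h', w')
        feats = feats.flatten(2).transpose(1, 2)       # (B, L, hidden) grid of feature vectors
        state = feats.mean(1)                          # init decoder state from the image
        logits = []
        for t in range(tokens.size(1)):
            attn = F.softmax(torch.bmm(feats, state.unsqueeze(2)).squeeze(2), dim=1)
            context = (attn.unsqueeze(2) * feats).sum(1)          # attention-weighted summary
            inp = torch.cat([self.embed(tokens[:, t]), context], dim=1)
            state = self.rnn(inp, state)
            logits.append(self.out(state))
        return torch.stack(logits, dim=1)              # (B, T, vocab) per-step token scores
```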
We present recursive recurrent neural networks with attention modeling (R2AM) for lexicon-free optical character recognition in natural scene images. The primary advantages of the proposed method are: (1) use of recursive convolutional neural networks (CNNs), which allow for parametrically efficient and effective image feature extraction; (2) an implicitly learned character-level language model, embodied in a recurrent neural network which avoids the need to use N-grams; and (3) the use of a soft-attention mechanism, allowing the model to selectively exploit image features in a coordinated way, and allowing for end-to-end training within a standard backpropagation framework. We validate our method with state-of-the-art performance on challenging benchmark datasets: Street View Text, IIIT5k, ICDAR and Synth90k.
Clustering is central to many data-driven application domains and has been studied extensively in terms of distance functions and grouping algorithms. Relatively little work has focused on learning representations for clustering. In this paper, we propose Deep Embedded Clustering (DEC), a method that simultaneously learns feature representations and cluster assignments using deep neural networks. DEC learns a mapping from the data space to a lower-dimensional feature space in which it iteratively optimizes a clustering objective. Our experimental evaluations on image and text corpora show significant improvement over state-of-the-art methods.
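DEC's clustering objective is compact enough to write down; a NumPy sketch of the soft assignments, the sharpened target distribution and the KL objective (array shapes are assumptions, with alpha fixed to 1 as in the paper):

```python
# Sketch of DEC's soft assignments q and target distribution p (NumPy).
# z: embedded points (n, d); mu: cluster centres (k, d).
import numpy as np

def soft_assign(z, mu, alpha=1.0):
    """Student's t-kernel similarity between embedded points and cluster centres."""
    dist2 = ((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1)     # (n, k) squared distances
    q = (1.0 + dist2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)                     # normalise over clusters

def target_distribution(q):
    """Sharpened targets: emphasise confident assignments, normalise by cluster frequency."""
    weight = q ** 2 / q.sum(axis=0)
    return weight / weight.sum(axis=1, keepdims=True)

def kl_objective(p, q):
    """DEC minimises KL(P || Q), back-propagating into the encoder and the centres."""
    return float((p * np.log(p / q)).sum())
```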
In recent years, recognition of text from natural scene images and video frames has received increased attention among researchers due to its various complexities and challenges. Because of low resolution, blurring, complex backgrounds, different fonts, colors, and variant alignment of text within images and video frames, text recognition in such scenarios is difficult. Most current approaches apply a binarization algorithm to convert the image into a binary image and then apply OCR to obtain the recognition result. In this paper, we present a novel approach based on color channel selection for text recognition from scene images and video frames. In the approach, a color channel is first selected automatically, and the selected color channel is then used for text recognition. Our text recognition framework is based on a Hidden Markov Model (HMM) that uses Pyramidal Histogram of Oriented Gradient features extracted from the selected color channel. For each sliding window of a color channel, our color-channel selection approach analyzes the image properties of the window, and a multi-label Support Vector Machine (SVM) classifier is then applied to select the color channel that will provide the best recognition results in that window. This per-window color channel selection has been found to be more fruitful than using a single color channel for the whole word image. Five different features have been analyzed for multi-label SVM based color channel selection, where a wavelet transform based feature outperforms the others. Our framework has been tested on different publicly available scene/video text image datasets. For Devanagari script, we collected our own dataset. The performance obtained from experimental results is encouraging and shows the advantage of the proposed method.
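A loose sketch of the per-window channel-selection idea; a plain multi-class SVM stands in for the paper's multi-label SVM, and the window features are a trivial placeholder for its wavelet-based features:

```python
# Loose sketch of per-sliding-window colour-channel selection.
# A multi-class SVC stands in for the paper's multi-label SVM, and the window
# features are a trivial placeholder for its wavelet-based features.
import numpy as np
from sklearn.svm import SVC

def window_features(window):
    """Placeholder features per window: simple stats of each colour channel."""
    return np.concatenate([[window[..., c].mean(), window[..., c].std()]
                           for c in range(window.shape[-1])])

# Placeholder training data: window features labelled with the channel that
# gave the best recognition result for that window.
X_train = np.random.rand(100, 6)
y_train = np.random.randint(0, 3, 100)          # 0=R, 1=G, 2=B
clf = SVC(kernel="linear").fit(X_train, y_train)

def select_channel(window):
    """Pick the colour channel predicted to recognise best for this window."""
    best = int(clf.predict([window_features(window)])[0])
    return window[..., best]                    # grey image passed on to the HMM recogniser
```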
Recently, scene text detection has become an active research topic in computer vision and document analysis because of its great importance and significant challenges. However, the vast majority of existing methods detect text within local regions, typically by extracting character-, word- or line-level candidates followed by candidate aggregation and false-positive elimination, which potentially excludes the effect of wide-scope and long-range contextual cues in the scene. To take full advantage of the rich information available in the whole natural image, we propose to localize text in a holistic manner, by casting scene text detection as a semantic segmentation problem. The proposed algorithm runs directly on full images and produces global, pixel-wise prediction maps, from which detections are subsequently formed. To better exploit the properties of text, three types of information regarding text regions, individual characters, and their relationships are estimated with a single Fully Convolutional Network (FCN) model. With such predictions of text properties, the proposed algorithm can simultaneously handle horizontal, multi-oriented and curved text in real-world natural images. Experiments on standard benchmarks, including ICDAR 2013, ICDAR 2015 and MSRA-TD500, demonstrate that the proposed algorithm substantially outperforms previous state-of-the-art approaches. Moreover, we report the first baseline result on the recently released, large-scale COCO-Text dataset.
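A crude sketch of the detection-as-segmentation idea: a tiny generic FCN predicting the three kinds of per-pixel maps mentioned above; the architecture and channel counts are assumptions, not the paper's network:

```python
# Crude sketch of holistic text detection as semantic segmentation.
# A tiny FCN predicts per-pixel maps for text regions, characters and linking;
# the architecture and channel counts are illustrative, not the paper's network.
import torch
import torch.nn as nn

class TinyTextFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(64, 3, 1)          # channels: text region, character, linking
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)

    def forward(self, x):
        maps = self.up(self.head(self.backbone(x)))
        return torch.sigmoid(maps)               # (B, 3, H, W) pixel-wise prediction maps

# Detections would then be formed by grouping pixels in these global maps.
model = TinyTextFCN()
pred = model(torch.rand(1, 3, 256, 256))         # full-image, pixel-wise prediction
```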
Convert scanned images of documents into rich text with advanced Deep Learning OCR APIs. Free forever plans available.