One of the most fascinating features of deep neural networks applied to NLP is that, provided with enough examples of human language, they can generate text and help us discover many of the subtle variations in meaning. In a recent blog post by Google research scientist Brian Strope and engineering director Ray Kurzweil we read:

"The content of language is deeply hierarchical, reflected in the structure of language itself, going from letters to words to phrases to sentences to paragraphs to sections to chapters to books to authors to libraries, etc."

Following this hierarchical structure, new computational language models aim at simplifying the way we communicate, and they have silently entered our daily lives: from Gmail's "Smart Reply" feature to the keyboard in our smartphones, recurrent neural networks and character- and word-level prediction using LSTMs (Long Short-Term Memory) have paved the way for a new generation of agentive applications.

From keyword research to keyword generation

As usual with my AI-powered SEO experiments, I started with a concrete use case. One of our strongest publishers in the tech sector was asking us for new, unexplored search intents to invest in with articles and how-to guides. Search marketers, copywriters and SEOs have spent the last 20 years scouting for the right keyword to connect with their audience.

While there is a large number of tools available for keyword research, I thought: wouldn't it be better if our client could have a smart auto-complete to generate any number of keywords in their semantic domain, rather than keyword data generated by us? The way a search intent (or query) can be generated, I also thought, is quite similar to the way a title could be suggested during the editing phase of an article. And titles (or SEO titles), suggested by a language model trained on what people search for, could help us find the audience we're looking for in a simpler way.

Jump directly to the code: Interactive textgenrnn Demo w/ GPU for keyword generation

The unfair advantage of Recurrent Neural Networks

What makes RNNs "more intelligent" when compared to feed-forward networks is that rather than working on a fixed number of steps they compute sequences of vectors. They are not limited to processing only the current input: they also take into account everything they have perceived previously in time.

The LSTM cell, where each gate works like a perceptron.

Imagine a long article where I explain at the beginning that I am Italian, and this information is then followed by, let's say, 2,000 more words. An LSTM is designed in such a way that it can "recall" that piece of information while processing the last sentence of the article and use it to infer, for example, that I speak Italian.

A common LSTM cell is made of an input gate, an output gate and a forget gate. The cell remembers values over a time interval, and the three gates regulate the flow of information into and out of the cell, each working much like a mini neural network. In this way, LSTMs can overcome the vanishing gradient problem of traditional RNNs.

If you want to learn more about the mathematics behind recurrent neural networks and LSTMs, go ahead and read this article by Christopher Olah.
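To make the gating mechanics concrete, here is a minimal sketch of a single LSTM step in plain NumPy. This is an illustrative toy, not textgenrnn's actual implementation; the function name, weight layout and sizes are my own choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """A single LSTM time step over one input vector x (toy example)."""
    # Pre-activations for the forget (f), input (i), output (o) gates
    # and the candidate cell update (g), computed as one stacked product.
    z = W @ x + U @ h_prev + b
    f, i, o, g = np.split(z, 4)
    f = sigmoid(f)   # forget gate: how much old memory to keep
    i = sigmoid(i)   # input gate: how much new information to write
    o = sigmoid(o)   # output gate: how much memory to expose
    g = np.tanh(g)   # candidate values to write into the cell
    c = f * c_prev + i * g   # cell state: the long-term memory lane
    h = o * np.tanh(c)       # hidden state: what the next step sees
    return h, c

# Toy usage: run a short random sequence through one cell.
n_in, n_hid = 8, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(4 * n_hid, n_in)) * 0.1
U = rng.normal(size=(4 * n_hid, n_hid)) * 0.1
b = np.zeros(4 * n_hid)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # 5 time steps
    h, c = lstm_step(x, h, c, W, U, b)
```

Because the cell state `c` is only ever rescaled by the forget gate and added to, information can survive across many steps: that is the "recall over 2,000 words" behaviour described above.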
Let's get started: "Io sono un compleanno!" ("I am a birthday!")

After reading Andrej Karpathy's blog post I found a terrific Python library called textgenrnn by Max Woolf. This library is developed on top of TensorFlow and makes it super easy to experiment with Recurrent Neural Networks for text generation.

Before looking at generating keywords for our client I decided to learn text generation, and how to tune the hyperparameters in textgenrnn, by doing a few experiments. AI is interdisciplinary by definition: the goal of every project is to bridge the gap between computer science and human intelligence. I started my tests by throwing at the process a large English text file I found on Peter Norvig's website, and I ended up, thanks to the help of Priscilla (a clever content writer collaborating with us), "resurrecting" David Foster Wallace with his monumental Infinite Jest (provided in Italian from Priscilla's ebook library and spiced up with some of her random writings).

At the beginning of the training process (in a character-by-character configuration) you can see exactly what the network sees: a nonsensical sequence of characters that, a few epochs (training iteration cycles) later, turns into proper words.

As I became more accustomed to the training process I was able to generate the following phrase: "Io non voglio temere niente? Come no, ancora per Lenz." ("I don't want to fear anything? And, of course, still for Lenz.")
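For reference, this is roughly what such an experiment looks like with textgenrnn's documented API (train_from_file and generate_samples). A hedged sketch: 'corpus.txt' is a placeholder file name and the hyperparameter values are illustrative, not the ones used in the experiments above:

```python
from textgenrnn import textgenrnn

# Train a fresh character-level model on a line-delimited text corpus.
# 'corpus.txt' is a placeholder: an English corpus first, an Italian
# ebook later, as described in the post.
textgen = textgenrnn()
textgen.train_from_file(
    'corpus.txt',
    new_model=True,    # start from scratch instead of the bundled weights
    word_level=False,  # character-by-character configuration
    num_epochs=10,     # illustrative; more epochs turn noise into words
)

# Sample at several temperatures: low values stay conservative,
# high values drift toward more creative (and nonsensical) phrases.
textgen.generate_samples(n=3, temperatures=[0.2, 0.5, 1.0])
```

Comparing outputs across temperatures is the quickest way to watch the model move from safe, repetitive text to surprising phrases like the ones quoted above.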