How to tackle a lack of data: an overview of transfer learning
How to tackle a lack of data: an overview of transfer learning, with examples such as BERT and GPT and an exploration of its underlying concepts.
Data Science Intern at DATANOMIQ.
Majoring in computer science. Currently studying the mathematical side of deep learning, such as densely connected layers, CNNs, RNNs, and autoencoders, and creating study materials on them. Also starting to work on Bayesian deep learning algorithms.
Do you know how AI is really used in the gaming industry? I will introduce the many ways this is done, from AI players to textures.
Some of you might have got away with explaining reinforcement learning (RL) only by saying something vague like “RL enables computers to learn through trial and error.” But if you have patiently read my articles so far, you might have come to say “RL is a family of algorithms which simulate procedures similar to dynamic programming (DP).”
This is the third article of the series My elaborate study notes on reinforcement learning. 1. Some excuses for writing another article on the same topic In the last article […]
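The teaser above frames RL as a family of algorithms that simulate DP-like procedures. As an illustration of the DP side of that claim, here is a minimal value-iteration sketch on a hypothetical toy MDP; the states, transitions, and rewards below are invented for this example and do not come from the article series itself.

```python
# Value iteration on a hypothetical 3-state MDP (illustrative only).
# P[s][a] = list of (probability, next_state, reward) tuples.
P = {
    0: {0: [(1.0, 1, 0.0)], 1: [(1.0, 0, 0.0)]},
    1: {0: [(1.0, 2, 1.0)], 1: [(1.0, 0, 0.0)]},
    2: {0: [(1.0, 2, 0.0)], 1: [(1.0, 2, 0.0)]},  # absorbing state
}
gamma = 0.9  # discount factor

V = {s: 0.0 for s in P}  # initial value estimates
for _ in range(100):  # sweep until (approximately) converged
    delta = 0.0
    for s in P:
        # Bellman optimality backup:
        # V(s) <- max_a sum_{s'} p(s'|s,a) * (r + gamma * V(s'))
        best = max(
            sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
            for a in P[s]
        )
        delta = max(delta, abs(best - V[s]))
        V[s] = best
    if delta < 1e-8:
        break

print(V)  # converges to V[0] = 0.9, V[1] = 1.0, V[2] = 0.0
```

RL methods such as Q-learning approximate exactly this kind of backup, but from sampled experience instead of a known transition model `P`.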
This is the second article of the series My elaborate study notes on reinforcement learning. *I must admit I could not fully explain how I tried visualizing ideas of Bellman […]
This is the first article of my article series “My elaborate study notes on reinforcement learning.” *I adjusted mathematical notations in this article as close as possible to “Reinforcement Learning: An […]
This article is going to be composed of the following contents.
Understanding the “simplicity” of reinforcement learning: comprehensive tips to take the trouble out of RL
Graphical understanding of dynamic programming and the Bellman equation: taking a typical approach at first
*This is the fourth article of my article series “Illustrative introductions on dimension reduction.” 1. Our expedition of eigenvectors still continues. This article is still going to be about eigenvectors […]
If you have been patient enough to read the former articles of this article series, “Instructions on Transformer for people outside NLP field, but with examples of NLP,” you should […]
This is the fourth article of my article series named “Instructions on Transformer for people outside NLP field, but with examples of NLP.” 1. Wrapping points up so far. This […]