christina.kim

Scaling Laws for Language Transfer Learning

Building on OpenAI’s recent work on scaling laws, my project explores how much pre-training on English helps when transferring to other languages. Here, I will discuss scaling laws discovered while fine-tuning across different languages with...

11 Apr 2021

Experimenting in an Infinite Data Regime

Most machine learning tutorials are geared toward well-defined datasets that can fit in the memory of most machines. These datasets are great for benchmarking new algorithms and for learning. However, newer SOTA models have many more...

12 Feb 2021

Scaling Laws

My research direction for the OpenAI Scholars program is heavily influenced by OpenAI’s scaling laws papers on language models and autoregressive models, published last year. Scaling laws exist for cross-entropy loss in five...

29 Jan 2021

Keeping Things Regular

In this post, I will introduce the direction of my OpenAI Scholars project. Inspired by OpenAI’s scaling laws papers on language models and autoregressive models, published last year, I’m interested in learning more...

15 Jan 2021

How to Navigate Conferences

Machine learning research moves incredibly fast. Dozens of new papers are published on arXiv every day, and it’s overwhelming trying to keep up to date. Conferences are a great way to...

04 Dec 2020

Research Tools

One of the hardest things about switching from software engineering to research has been the open-ended nature of research and measuring progress. For me, software engineering usually has more explicit objectives, and it’s easy to...

20 Nov 2020

Transformers, Roll Out!

I’ve spent the majority of my time (when not constantly refreshing for election results) thinking about transformers. There are already many articles describing transformers and implementing them, so I won’t go into too much...

06 Nov 2020

Hello from OpenAI

I’m excited to be joining the Fall 2020 cohort of OpenAI’s Scholars program. I’ll be writing semi-regularly as a log of what I’m learning and thinking about. I’m thrilled to be part of the scholars’...

23 Oct 2020

The NLP Papers to Read Before ICLR 2020

Ahead of next week’s ICLR 2020 virtual conference, I went through the 687 accepted papers (out of 2,594 submitted, up 63% since 2019!) and identified 9 papers with the potential to advance the use of deep...

23 Apr 2020

Grounded Language Learning

04 Mar 2020