Bhaskar's Blog

Paper Walkthrough: Learning Equilibria of Simulation Based Games - Part 2

This is part 2 of my Paper Walkthrough of the paper "Learning Equilibria of Simulation Based Games" by Enrique Areyan Viqueira, Cyrus Cousins, Eli Upfal, and Amy Greenwald [1]. If you have not read part 1, please go back and read it first. I will begin this part with a brief recap of what has been covered so far...

Paper Walkthrough: Learning Equilibria of Simulation Based Games - Part 1

Over the past year, I’ve developed a strong interest in the research area of Algorithmic Game Theory, specifically the areas concerned with computational decision making. In learning about topics from this area, I have been working through a large body of literature. Through this series of “Paper Walkthroughs,” I hope to deepen my understanding of some papers that interest me and, hopefully, produce some content that is of help to others.

In Loving Memory of my Naana

Last night my grandfather on my mother’s side passed away. My mom, my brother, and I were just coming back from a great hiking trip when we got the news, and my family has been absolutely devastated. Though...

SVM Implementation using CVXOPT - Python

... After developing somewhat of an understanding of the algorithm, my first project was to create an actual implementation of the SVM algorithm. Though it didn't end up being entirely from scratch, since I used CVXOPT to solve the underlying convex optimization problem, the implementation helped me better understand how the algorithm works and the pros and cons of using it. In this post, I hope to walk you through that implementation. Note that this post assumes an understanding of the underlying math behind SVMs. If you feel uncomfortable on that front, I would again recommend checking out the resources linked above.
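
To give a flavor of what the post covers, here is a minimal sketch of how the soft-margin SVM dual can be handed to CVXOPT's QP solver. The names X, y, and the margin parameter C are illustrative assumptions on my part, not code taken from the post itself.

    # Minimal sketch (assumed setup): solve the soft-margin SVM dual with CVXOPT.
    # X is an (n, d) array of samples, y an (n,) array of +/-1 labels.
    import numpy as np
    from cvxopt import matrix, solvers

    def fit_svm_dual(X, y, C=1.0):
        n = X.shape[0]
        K = X @ X.T                                        # linear-kernel Gram matrix
        P = matrix((np.outer(y, y) * K).astype(np.double)) # P_ij = y_i y_j <x_i, x_j>
        q = matrix(-np.ones(n))                            # maximizing sum(alpha) = minimizing -sum(alpha)
        G = matrix(np.vstack([-np.eye(n), np.eye(n)]))     # box constraints 0 <= alpha_i <= C
        h = matrix(np.hstack([np.zeros(n), C * np.ones(n)]))
        A = matrix(y.reshape(1, -1).astype(np.double))     # equality constraint: sum_i alpha_i y_i = 0
        b = matrix(0.0)
        sol = solvers.qp(P, q, G, h, A, b)
        alpha = np.ravel(sol['x'])
        w = (alpha * y) @ X                                # recover the primal weight vector
        sv = alpha > 1e-5                                  # indices of the support vectors
        bias = np.mean(y[sv] - X[sv] @ w)
        return w, bias

Again, this is only a sketch under those assumptions; the post itself walks through the full implementation.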

Lagrangian Multipliers - An Intuitive Explanation

Lagrangian multipliers are a concept commonly taught in Calculus III classes. They’re an amazing mathematical tool that lets you find the local extrema of a function subject to an equality constraint. While it is an incredibly useful tool to have in a mathematician’s toolkit, in my experience, very few students actually understand how it works. Yes, you set the gradient of the function you’re optimizing equal to some multiplier times the gradient of your constraint, and solving the resulting system gives you the local extrema. But why does that work? In this post, I hope to provide some intuition to answer that question.
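
As a concrete (and purely illustrative) instance of that system, here is a small SymPy sketch that extremizes f(x, y) = xy subject to x + y = 1; the particular function and constraint are my own choices, not examples from the post.

    # Illustrative example: extremize f(x, y) = x*y subject to x + y = 1
    # by solving the Lagrange system  grad f = lambda * grad g  together with  g(x, y) = 0.
    import sympy as sp

    x, y, lam = sp.symbols('x y lambda')
    f = x * y
    g = x + y - 1                                          # constraint written as g(x, y) = 0

    L = f - lam * g                                        # the Lagrangian
    equations = [sp.diff(L, v) for v in (x, y, lam)]       # dL/dx = dL/dy = dL/dlambda = 0
    print(sp.solve(equations, (x, y, lam)))                # -> x = y = 1/2, lambda = 1/2

Note that setting the derivative of the Lagrangian with respect to the multiplier to zero simply recovers the constraint, which is why solving this system is the same as solving "gradient of f equals the multiplier times the gradient of g" on the constraint set.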
