We were shortlisted for the interview based on GATE score. There was no written test. The interviews were conducted on the 24th and 25th of May. There was a presentation before this where the HoD of the CSE department at IIT Madras explained everything about the MS and PhD programs and patiently fielded questions for about an hour.

They called me on my phone while the previous candidate's interview was going on and told me to be ready. I joined 10 minutes later, when they called again.

There was a panel of 6 professors out of which 3 interacted with me. The first professor asked for my introduction as is standard. The second professor shared a Codeshare link with me and began my interview. The question: write a function to find the top 3 (maximums) of a large array. The array is not pre-sorted and sorting is not allowed.

I asked if any time complexity requirements had to be heeded, and the prof asked me to go ahead and write a simple function. I wrote three loops, each to find the first, second and third maximums of the array. He expected me to have done it in one loop, and started saying that I had a bug in my code, but then noticed the 3 loops and went “Oh. You’ve written three”.
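My three-loop version might have looked something like this (a reconstruction from memory, not my exact code; the index-tracking to skip already-found maximums is my choice of bookkeeping):

```python
def top3_naive(arr):
    """Find the top 3 elements with three full scans of the array."""
    taken = set()   # indices of maximums already found
    result = []
    for _ in range(3):
        best_i = None
        for i, v in enumerate(arr):
            if i in taken:
                continue  # skip elements already picked as a maximum
            if best_i is None or v > arr[best_i]:
                best_i = i
        taken.add(best_i)
        result.append(arr[best_i])
    return result
```

Correct, but it scans the array three times instead of once.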

He then asked me to do it in one loop. I thought for a bit and he gave me a hint: find the top 3 amongst the first 3 elements of the array and carry it on across the array. After a bit of light discussion, I wrote the right code.
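Following his hint, the single-loop version keeps a running top 3, seeded from the first 3 elements, and updates it as it walks the rest of the array. A sketch (sorting just the first 3 elements is constant work, so it doesn't violate the no-sorting constraint):

```python
def top3(arr):
    """Find the top 3 elements of arr in a single pass."""
    # Seed with the top 3 among the first 3 elements (constant work).
    a, b, c = sorted(arr[:3], reverse=True)  # invariant: a >= b >= c
    # Carry the running top-3 across the rest of the array.
    for v in arr[3:]:
        if v > a:
            a, b, c = v, a, b
        elif v > b:
            b, c = v, b
        elif v > c:
            c = v
    return [a, b, c]
```

Each element is examined once, for O(n) time and O(1) extra space.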

After this, the third professor unmuted himself and entered the discussion. He asked why I had chosen linear algebra, and I mentioned that I had selected graph mining as my research topic, which falls under intelligent systems, so I had prepared linear algebra since the area needs it as a foundational subject.

P: What topics do you know in linear algebra?
Me: Elimination, rank, subspaces of a matrix, vector spaces and subspaces, determinants, eigenvalues and eigenvectors…

(Unlike my IIT Bombay interview, this time I had come prepared)

P: Let's start with vector spaces. What is a vector space? What is a subspace?

Me: Defined both of them, explained their properties.

P: Is the zero vector part of a vector space?

We had a bit of back and forth on this. He then pointed out that since I had mentioned a vector space is described by the span of its basis, I could set all the scalars to $0$ and get the zero vector, to which I said yes.

He mentioned that vector spaces have other properties, and I suddenly remembered: I mentioned distributivity and associativity, and he said yes.

P: Suppose we have a matrix equation $Ax = b$. $x$ is a column vector, $b$ is their product. What can you tell me about this?
Me: (reciting Gilbert Strang in my head) $Ax = b$ is solvable only when $b$ is in the column space of $A$.

P: Very good. So let’s suppose $b$ is not in the column space of $A$. We now want to solve an equation $A\hat{x} = p$ since $b$ is not there. We want to find a vector that approximately satisfies the requirement. What can you tell me about this situation? Is there a way?

Me: Explained about the concept of projections and derived the equations in chat.
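The derivation I wrote out in chat would have gone roughly like this (the standard least-squares argument, assuming $A$ has independent columns so that $A^TA$ is invertible):

```latex
% Choose \hat{x} so the error e = b - A\hat{x} is orthogonal to C(A),
% i.e. to every column of A:
e = b - A\hat{x} \perp C(A)
\;\Longrightarrow\; A^T(b - A\hat{x}) = 0
\;\Longrightarrow\; A^T A \hat{x} = A^T b,
\qquad p = A\hat{x} = A(A^T A)^{-1} A^T b .
```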

P: Asked a bit more about this, and then asked – how do we know that $A^TA\hat{x} = A^Tb$ is solvable? How do we know the new vector is in the column space of $A$?

Me: Explained how the error $b - A\hat{x}$ lies in the nullspace of $A^T$, but muddled up what he was asking.

P: Gives a few more hints, asks what the null space of $A^TA$ is.

Me: It’s the same as that of $A$. But I wasn’t sure how that helped.

Me: Provided a full proof based on both nullspaces being subsets of each other, which proves that they're equal. He gave me a small hint here too.
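The proof can be sketched as follows (mutual inclusion of the two nullspaces, which is what I argued):

```latex
% (1) If Ax = 0, then A^T A x = A^T 0 = 0, so N(A) \subseteq N(A^T A).
% (2) If A^T A x = 0, then x^T A^T A x = \|Ax\|^2 = 0, so Ax = 0,
%     giving N(A^T A) \subseteq N(A).
Ax = 0 \;\Rightarrow\; A^T A x = 0, \qquad
A^T A x = 0 \;\Rightarrow\; x^T A^T A x = \|Ax\|^2 = 0 \;\Rightarrow\; Ax = 0 .
% Equal nullspaces give equal ranks, so C(A^T A) = C(A^T);
% since A^T b \in C(A^T), the system A^T A \hat{x} = A^T b is solvable.
```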

P: Wanted to wrap up in the interest of time but said “I'll ask you one more question”. Gave me a scenario: we have a large matrix whose eigenvalues we have calculated. You give this matrix to a friend and he returns it to you with $1$ added to all the diagonal entries. Will you recalculate all the eigenvalues of the new matrix, or is there some other way?

Me: Clarified what he was asking. Then mentioned that recalculation isn't needed. If you add $1$ to the diagonal, it's as if you did $A + I$ where $I$ is identity. And in that case, the eigenvalues increase by $1$.

P: Can you prove this?

Me: Let $Ax = \lambda x$. Now $(A + I)x = Ax + Ix = Ax + x = \lambda x + x = (\lambda + 1)x$.
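A quick numeric check of this identity (my addition here, not something asked in the interview), using a hand-rolled $2 \times 2$ eigenvalue solver so nothing beyond the standard library is needed:

```python
import math

def eig2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial."""
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # real for symmetric matrices
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

# Eigenvalues of A = [[2, 1], [1, 3]] versus those of A + I = [[3, 1], [1, 4]]:
lam = eig2(2, 1, 1, 3)
lam_shift = eig2(3, 1, 1, 4)
assert all(abs(ls - (l + 1)) < 1e-9 for l, ls in zip(lam, lam_shift))
```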

P: Was satisfied.

I thanked everyone and the interview ended here. Around 45 minutes in total (according to the time they called me to join).