Recent posts in Engineering Mathematics


The webpage of Hossein Pishro-Nik's textbook Introduction to Probability, Statistics, and Random Processes. It is an open-access, peer-reviewed textbook designed for undergraduate and first-year graduate courses on the subject. This probability textbook may be used by both students and practitioners in engineering, mathematics, finance, and other related fields.

Mohitdas posted in Probability Dec 30, 2021

1. Overview

In this article, we’ll discuss the geometric probability distribution and its properties. We will prove each property mathematically and understand its significance. This article assumes basic knowledge of discrete mathematics and algebra for proofs.

2. Basic Definitions

We begin with a few basic definitions that will set the stage for things to come.

Sample space: A countable collection or set of discrete points $\Omega$. A set of values $S$ is said to be countable when there exists a bijection $f: S \to A$ where $A$ is a known countable set, such as the set of integers. Each point in the sample space represents an outcome.

A sample space may also be made of a continuous interval of points, but that is outside the scope of this article.

Conditional probability: Given events $A$ and $B$ as subsets of $\Omega$, the value $P(A | B)$ is the probability that $A$ occurs given that $B$ has already occurred. It is the probability of $A$ scaled down to the conditional universe of $B$.

Discrete random variable: A random variable $X$ is a function $g: \Omega \to A$ where $A$ is a subset of $\mathbb{R}$ and $\Omega$ is the sample space. Think of $X$ as taking on input values in $\Omega$ with a certain probability and providing a corresponding meaningful output. For example: we have a set of students and we want to measure their weights. The set of students is our sample space $\Omega$. $W$ could be a random variable that takes any student in this set as input, and outputs a weight value associated with the student. The more frequently a weight value is seen, the more probability it has of occurring.

Probability mass function (PMF): A function $f: A \to [0, 1]$ where $A$ is the subset of values that $X$ takes on, as given above. We usually use the notation $P(X = x)$ where $x \in A$.

Cumulative distribution function (CDF): In terms of our notation, it is the function $P(X \le x)$. 

A random variable $X$ is geometric if it adheres to the properties of the geometric distribution.

3. The Geometric Probability Mass Function

A geometric random variable $X$ represents the number of trials it takes for an event of interest to occur. Each trial is independent and has no bearing on the others. $X$ takes on the values $1$, $2$, $3$ and so on.

For example, we might want to count the number of times a coin is tossed until a head appears. Say the heads outcome occurs with probability $p$. This implies that the only other outcome, tails, happens with probability $1 - p$. The geometric PMF then takes the form:

$P(X = x) = (1 - p)^{x - 1}p$

This means that $x - 1$ trials failed before the success occurred. Visually, this looks like a downward-sloping set of discrete points on the XY plane: the sample space is plotted on the X-axis and the probability of each point on the Y-axis. The height of a point in this plot tells us how probable it is, and these heights decrease in the form of a geometric progression.
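As a quick sanity check of the PMF, here is a small simulation sketch (not part of the original article; the function names are my own) that estimates $P(X = x)$ empirically by repeating the coin experiment and compares it with $(1-p)^{x-1}p$:

```python
import random

# Illustrative sketch: estimate the geometric PMF empirically and compare it
# with the closed form (1 - p)^(x - 1) * p.
def trials_until_success(p, rng):
    """Number of Bernoulli(p) trials up to and including the first success."""
    count = 1
    while rng.random() >= p:
        count += 1
    return count

def empirical_pmf(p, x, n_samples=100_000, seed=42):
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples) if trials_until_success(p, rng) == x)
    return hits / n_samples

p = 0.3
for x in range(1, 5):
    exact = (1 - p) ** (x - 1) * p
    print(f"x={x}: exact={exact:.4f}, empirical={empirical_pmf(p, x):.4f}")
```

With 100,000 samples the empirical frequencies agree with the formula to about two decimal places.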

A more contrived example could involve a game with a number of outcomes. One of the outcomes signifies the end of the game. The others force the game to repeat. We are to measure the number of rounds the game takes before it completes.

4. The Geometric Cumulative Distribution Function

The CDF gives us a convenient measure of the probability of all input points up to a given baseline $x$. For geometric distributions, it takes the form:

$P(X \le x) = 1 - (1 - p)^x$

A proof of the above fact follows from taking the summation:

$ P(X \le x) = \sum_{k = 1}^{x}(1 - p)^{k - 1}p$

                   $ = p\sum_{k = 1}^{x}(1 - p)^{k - 1}$

                   $ = p\frac{1 - (1 - p)^{x}}{1 - (1 - p)}$     [sum of a geometric progression]

                   $ = 1 - (1 - p)^x$         [cancel out $p$]
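The derivation above can be verified numerically with a short sketch (my own helper names, not from any library): the partial sum of the PMF should equal the closed form for any $p$ and $x$.

```python
# Sanity check: the partial sum of the geometric PMF equals 1 - (1 - p)^x.
def geometric_cdf_sum(p, x):
    return sum((1 - p) ** (k - 1) * p for k in range(1, x + 1))

def geometric_cdf_closed(p, x):
    return 1 - (1 - p) ** x

for p in (0.1, 0.5, 0.9):
    for x in (1, 3, 10):
        assert abs(geometric_cdf_sum(p, x) - geometric_cdf_closed(p, x)) < 1e-12
print("summation matches the closed form")
```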

5. Expected Value of a Geometric Random Variable

The expectation of a random variable, $E[X]$, is fundamentally a weighted average: each value in the output set is weighted by the probability that it occurs. If more points in the sample space correspond to a particular output, that output carries a greater weight. The expectation is also called the mean of the distribution. It can be described by the general formula:

$E[X] = \sum_{k = a}^{b}k\ P(X = k)$

The expectation of a geometric random variable in particular looks like:

$E[X] = \sum_{k = 1}^{\infty}k\ (1 - p)^{k - 1}p$

This summation has a closed form, which we can extract through the use of calculus. First, note that the sum of an infinite geometric progression (for $|x| \lt 1$) can be described as:

$\sum_{n = 0}^{\infty}x^n = \frac{1}{1 - x}$

Differentiate both sides with respect to $x$ to get:

$\sum_{n = 1}^{\infty}nx^{n - 1} = \frac{1}{(1 - x)^2}$

Using the above result, we proceed as follows:

$E[X] = p\sum_{k = 1}^{\infty}k\ (1 - p)^{k - 1}$

           $ = p\frac{1}{(1 - (1 - p))^2}$

           $ = p\frac{1}{p^2}$

           $ = \frac{1}{p}$
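The closed form $E[X] = 1/p$ can be checked against a truncated version of the infinite series (a sketch with illustrative names; the truncation at 10,000 terms makes the tail negligibly small):

```python
# Check E[X] = 1/p by truncating the series sum k (1 - p)^(k - 1) * p.
def expectation_truncated(p, terms=10_000):
    return sum(k * (1 - p) ** (k - 1) * p for k in range(1, terms + 1))

for p in (0.2, 0.5, 0.8):
    assert abs(expectation_truncated(p) - 1 / p) < 1e-9
print("E[X] = 1/p checked numerically")
```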

6. Variance of a Geometric Random Variable

The variance of a random variable, $Var(X)$, measures the spread of the distribution. It is the expected squared distance of $X$ from the mean. In general,

$Var(X) = E[(X - \mu)^2] = E[(X - E[X])^2]$

The variance can also be calculated using the formula:

$Var(X) = E[X^2] - (E[X])^2$

In the case of the geometric distribution, $E[X] = \frac{1}{p}$ and $(E[X])^2 = \frac{1}{p^2}$. It remains to calculate $E[X^2]$. We do this by algebraic manipulation and the linearity of expectation, as follows:

$E[X^2] = E[X^2 - X + X] = E[X(X - 1) + X] = E[X(X - 1)] + E[X]$.

Further, let $1 - p = q$. Note that $X(X - 1)$ is itself a random variable, as it is a function of a random variable. Therefore, it makes sense to calculate its expectation.

$E[X(X - 1)] = p\sum_{x = 1}^{\infty}x\ (x - 1)\ q^{x - 1}$

This is equivalent to writing:

 $E[X(X - 1)] = p\frac{d}{dq}\left (\sum_{x = 1}^{\infty}(x - 1)\ q^x \right )$

                         $ = p\frac{d}{dq}\left (q^2\ \sum_{x = 2}^{\infty}(x - 1)\ q^{x - 2}\right )$

                         $ = p\frac{d}{dq}\left (q^2\ \frac{d}{dq}\left (\sum_{x = 2}^{\infty}q^{x - 1}\right )\right )$

                         $ = p\frac{d}{dq}\left (q^2\ \frac{d}{dq}\left (\sum_{x = 1}^{\infty}q^{x}\right )\right )$

                         $ = p\frac{d}{dq}\left (q^2\ \frac{d}{dq}\left (\frac{1}{1 - q}\ - 1\right )\right )$

                         $ = p\frac{d}{dq}\left (q^2\ \left (\frac{1}{(1 - q)^2}\right )\right )$

                         $ = p\ \frac{2q}{(1 - q)^3}$

                         $ = p\ \frac{2(1 - p)}{(1 - (1 - p))^3}$

                         $ = \frac{2 - 2p}{p^2}$

Plugging in this result, we get:

$Var(X) = E[X(X - 1)] + E[X] - (E[X])^2$

               $ = \frac{2 - 2p + p - 1}{p^2}$

               $ = \frac{1 - p}{p^2}$
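The result $Var(X) = (1-p)/p^2$ can also be checked numerically by truncating the series for $E[X]$ and $E[X^2]$ (an illustrative sketch, not part of the proof):

```python
# Check Var(X) = (1 - p)/p^2 via truncated series for E[X] and E[X^2].
def variance_truncated(p, terms=10_000):
    ex = sum(k * (1 - p) ** (k - 1) * p for k in range(1, terms + 1))
    ex2 = sum(k * k * (1 - p) ** (k - 1) * p for k in range(1, terms + 1))
    return ex2 - ex ** 2

for p in (0.25, 0.5, 0.75):
    assert abs(variance_truncated(p) - (1 - p) / p ** 2) < 1e-8
```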

7. Memorylessness of the Geometric Distribution

An interesting property inherent in the geometric distribution is its memorylessness. If we begin observing the experiment after a given number of trials have elapsed, the remaining trials follow the same distribution as the trials that came before. It is as if we had started over!

Suppose we start observing after $a$ trials have elapsed, and ask about a further $b$ trials. We claim that:

$P(X \gt a + b \mid X \gt a) = P(X \gt b)$

To prove this, first note that:

$P(A \mid B) = \frac{P(A\ \cap\ B)}{P(B)}$

So we have:

$P(X \gt a + b \mid X \gt a) = \frac{P(\{X \gt a + b\}\ \cap\ \{X \gt a\})}{P(X \gt a)}$

But $\{X \gt a + b\} \cap \{X \gt a\} = \{X \gt a + b\}$, since the first event is contained in the second.

Therefore we have:

$P(X \gt a + b \mid X \gt a) = \frac{P(X \gt a + b)}{P(X \gt a)}$

Here we may use the CDF to quickly calculate tail probabilities of the form $P(X \gt x)$:

$P(X \gt x) = 1 - P(X \le x) = (1 - p)^x$

Thus we get:

$P(X \gt a + b \mid X \gt a) = \frac{(1 - p)^{a + b}}{(1 - p)^a} = (1 - p)^b = P(X \gt b)$
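The memorylessness identity can be confirmed numerically with the tail formula (a small sketch with my own helper name):

```python
# Verify P(X > a + b | X > a) = P(X > b) using the tail P(X > x) = (1 - p)^x.
def tail(p, x):
    """P(X > x) for a geometric random variable with success probability p."""
    return (1 - p) ** x

p, a, b = 0.3, 4, 7
lhs = tail(p, a + b) / tail(p, a)   # P(X > a + b | X > a)
rhs = tail(p, b)                    # P(X > b)
assert abs(lhs - rhs) < 1e-12
print("memorylessness holds:", lhs)
```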

8. Conclusion

In this article, we discussed the various properties of the geometric distribution. We saw which situations can be modelled by this distribution. Further, we algebraically proved each property and underlined what it really means.

pritishc posted in Probability Jun 17, 2021 edited Jun 17, 2021 by pritishc

Given an infinite sequence of numbers $<a_0, a_1, a_2, a_3, . . .>$, the (ordinary) generating function for the sequence is defined to be the power series:

$A(x) = \sum_{i=0}^{\infty}a_ix^i =  a_0 + a_1x + a_2x^2 + a_3x^3 + . . . $ 


Consider the binomial expansion of $(1-x)^n$, where $n\in \mathbb{Z}^+$

$(1-x)^n = \sum_{i=0}^{n}\binom{n}{i}(-x)^i$

$\quad = 1 - nx + \frac{n(n-1)}{2!}x^2 - \frac{n(n-1)(n-2)}{3!}x^3 + \ldots +(-1)^k\frac{n(n-1)(n-2)\ldots (n-k+1)}{k!}x^k + \ldots + (-1)^nx^n $


Note that if we substitute $n$ with its negative value, the expansion no longer terminates; it becomes an infinite series (convergent for $|x| \lt 1$), i.e.

$(1-x)^{-n}  = 1 - (-n)x + \frac{(-n)(-n-1)}{2!}x^2 - \frac{(-n)(-n-1)(-n-2)}{3!}x^3  +\ldots $

$\qquad =1 + nx + \frac{n(n+1)}{2!}x^2 + \frac{n(n+1)(n+2)}{3!}x^3  +\ldots $

$\qquad = \sum_{i=0}^{\infty}\binom{n+i-1}{i}x^i$


(Expansion of $(1+x)^{-n}$ is the same, but with alternating signs. Try to derive it yourself)

In 99% of the cases, we’ll be dealing with $(1-x)^{-n}$ as far as GATE is concerned.


Applying the above formula for $n=-1$, we get

$(1-x)^{-1} = 1 + x + x^2 + x^3 + x^4 + . . .$

 (Notice that it’s a generating function for the sequence $<1,1,1,...>$)


Similarly for $n=-2$, we get

$(1-x)^{-2} = 1 + 2x + 3x^2 + 4x^3 + 5x^4 + . . .$

(Notice that it’s a generating function for the sequence $<1,2,3,4,5,...>$) 
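These two special cases fall out of the general coefficient formula $\binom{n+i-1}{i}$; a quick sketch to check (helper name is mine, `math.comb` is from the standard library):

```python
from math import comb

# Coefficient of x^i in (1 - x)^(-n) is C(n + i - 1, i):
# n = 1 gives <1,1,1,...>, n = 2 gives <1,2,3,4,5,...>.
def neg_binom_coeff(n, i):
    return comb(n + i - 1, i)

print([neg_binom_coeff(1, i) for i in range(6)])  # [1, 1, 1, 1, 1, 1]
print([neg_binom_coeff(2, i) for i in range(6)])  # [1, 2, 3, 4, 5, 6]
```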


**Operations on Generating Functions**

Let’s assume two generating functions:

$A(x) =  \sum_{i=0}^{\infty}a_ix^i $

$B(x) =  \sum_{i=0}^{\infty}b_ix^i $


  1. $\forall \alpha, \beta \in \mathbb{R}$,  $ \alpha A(x) + \beta B(x)$ is another generating function $C(x)$ where
$c_i = \alpha a_i + \beta b_i$,    $ \forall i \in \{0,1,2,3,...\}$ 



  2.  $A(x)\times B(x)$ is another generating function $D(x)$ where
$d_i = a_0b_i + a_1b_{i-1} + a_2b_{i-2} + . . . + a_ib_0$,    $ \forall i \in \{0,1,2,3,...\}$



  3. $A^2(x) = A(x) \times A(x)$ is another generating function $E(x)$ where
$e_i = a_0a_i + a_1a_{i-1} + a_2a_{i-2} + . . . + a_ia_0$,    $ \forall i \in \{0,1,2,3,...\}$



  4. $A(kx)$ is another generating function $F(x)$ where
$f_i = k^ia_i$,    $ \forall i \in \{0,1,2,3,...\}$



  5. $x^mA(x)$, $m\in\mathbb{N}$ is another generating function $G(x)$ where
$g_i = 0$,     $\forall i \in \{0, 1, 2, ..., m-1\}$
$g_i = a_{i-m}$,    $\forall i \in \{m, m+1, m+2, . . .\}$



  6. $(1-x)A(x) = A(x) - xA(x)$ is another generating function $H(x)$ where
$h_0 = a_0$ and $h_i = a_i - a_{i-1}$,    $\forall i \in \{1,2,3,...\}$



  7. $\frac{A(x)}{(1-x)} = A(x)[1+x+x^2+x^3+...] = A(x) + xA(x) + x^2A(x) + . . . $ is another generating function $J(x)$ where
$j_i = a_0 + a_1 + a_2 + . . . + a_i$,    $ \forall i \in \{0,1,2,3,...\}$



  8. $A'(x) = \frac{d}{dx}A(x)$ is another generating function $K(x)$ where
$k_i = (i+1)a_{i+1}$,    $ \forall i \in \{0,1,2,3,...\}$


(Note: try deriving all 8 operations above; it's pretty easy and mechanical)
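Two of the operations above (the Cauchy product and division by $1-x$) can be sketched on truncated coefficient lists; this is an illustrative snippet with my own function names, not any library API:

```python
# Operation 2: Cauchy product d_i = a_0*b_i + a_1*b_{i-1} + ... + a_i*b_0.
def multiply(a, b):
    n = min(len(a), len(b))
    return [sum(a[j] * b[i - j] for j in range(i + 1)) for i in range(n)]

# Operation 7: A(x)/(1-x) has coefficients j_i = a_0 + a_1 + ... + a_i.
def divide_by_one_minus_x(a):
    out, s = [], 0
    for coeff in a:
        s += coeff
        out.append(s)
    return out

ones = [1] * 6                                      # 1/(1-x) = <1,1,1,...>
print(multiply(ones, ones))                         # [1, 2, 3, 4, 5, 6] -> 1/(1-x)^2
print(divide_by_one_minus_x(multiply(ones, ones)))  # [1, 3, 6, 10, 15, 21]
```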


Kind of questions that may be asked in GATE:


  1. Given something like $(a_0x^k + a_1x^{k+1} + a_2x^{k+2} + ...)^m$, find the coefficient of $x^p$, where $k,m,p, a_i \in \mathbb{N}, \forall i \in \{0,1,2,...\}$ 

Approach: If all the coefficients are $1$, take $x^k$ as common and bring it out, i.e.

$[x^k (1 + x + x^2 + x^3 + ...)]^m = x^{km}[1+x+x^2+x^3...]^m = x^{km}[\frac{1}{1-x}]^{m} = x^{km}(1-x)^{-m}$


Now $(1-x)^{-m}$ is simply $\sum_{i=0}^{\infty}\binom{m+i-1}{i}x^i$. Putting everything together, we just need to find the coefficient of $x^p$ in $x^{km}\sum_{i=0}^{\infty}\binom{m+i-1}{i}x^i$.


If we let $p = km + z$ where $z \in \mathbb{N}$, then the coefficient of $x^p$ is just the coefficient of $x^z$ in $\sum_{i=0}^{\infty}\binom{m+i-1}{i}x^i$, which is simply $\binom{m+z-1}{z} = \binom{m+p-km-1}{p-km}$


If not all the coefficients $a_i$ in $(a_0x^k + a_1x^{k+1} + a_2x^{k+2} + ...)^m$ are $1$, then like before, bring out $x^k$ and simply try to bring $(a_0 + a_1x + a_2x^2 + ...)$ into closed form (maybe use the “manipulation” technique as mentioned below), then raise it to power $m$ and rest of the calculation is pretty similar to the one I wrote above. Pretty time consuming and tedious stuff depending on what the coefficients are, but considering GATE, the expression should be a direct expansion of $x^k(1\pm x)^{-n}$ where $k,n \in \mathbb{Z}^+$
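The recipe for the all-ones case can be sketched and cross-checked by brute-force polynomial expansion (an illustrative snippet, names my own):

```python
from math import comb

# Coefficient of x^p in (x^k + x^{k+1} + ...)^m = x^{km} (1 - x)^{-m}.
def coeff_by_formula(k, m, p):
    z = p - k * m
    return comb(m + z - 1, z) if z >= 0 else 0

# Brute force: repeatedly multiply truncated coefficient lists.
def coeff_brute_force(k, m, p):
    poly = [0] * (p + 1)
    poly[0] = 1
    base = [1 if i >= k else 0 for i in range(p + 1)]   # x^k + x^{k+1} + ...
    for _ in range(m):
        poly = [sum(poly[j] * base[i - j] for j in range(i + 1))
                for i in range(p + 1)]
    return poly[p]

assert coeff_by_formula(2, 3, 10) == coeff_brute_force(2, 3, 10) == 15
```

Here $k=2$, $m=3$, $p=10$ gives $z = 10 - 6 = 4$ and $\binom{6}{4} = 15$.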



  2. Given an expression for calculating each coefficient of some generating function (i.e. each term of the infinite sequence), find the closed form of that generating function.

Approach: Check if the expression involves $’+’$ symbol. If it does, then the resulting generating function will be a sum of generating functions generated by each of those terms. For example, if $a_i = 7i + 6i^2$, then generating function of $A(x)$ is simply the sum of generating functions of the sequences $<(7\times 0), (7\times 1), (7\times 2), . . .>$ and $<(6\times 0^2), (6\times 1^2), (6\times 2^2), ...>$ respectively.

If there’s no $’+’$ symbol, then it’s simply the generating function generated by that single term. Here’s an example to make things concrete:


If each term of an infinite sequence $\{a_n\}$ is calculated as $a_i = 7i + 6i^2, \forall i \in \{0,1,2,3,...\}$, find the generating function generated by the sequence $\{a_n\}$ 

Ans: As stated above, if we assume $A(x)$ to be the generating function generated by the sequence $\{a_n\}$, then $A(x) = B(x) + C(x)$, where $B(x)$ and $C(x)$ are generating functions of the sequences $<(7\times 0), (7\times 1), (7\times 2), . . .>$ and $<(6\times 0^2), (6\times 1^2), (6\times 2^2), ...>$ respectively.


$B(x) = 0 + 7x + 14x^2 + 21x^3 + … = 7(1x + 2x^2 + 3x^3 + ….) = 7x(1+2x+3x^2+4x^3+...) = 7x(1-x)^{-2}$


Calculating $C(x)$ is a bit tricky.

$C(x) = 0 + (6\times 1^2)x + (6\times 2^2)x^2 + (6\times 3^2)x^3 + … = 6(0 + 1^2x + 2^2x^2 + 3^2x^3 + ...)$


Using the "manipulation" technique (more on it below),

$(1-x)^{-2} = 1 + 2x + 3x^2 + 4x^3 + ….$

$=> x(1-x)^{-2} = x + 2x^2 + 3x^3 + 4x^4 + ….$

$=> \frac{d}{dx}[x(1-x)^{-2}] = 1^2 + 2^2x + 3^2x^2 + 4^2x^3 + ….$

$=> x[\frac{d}{dx}[x(1-x)^{-2}]] = 1^2x + 2^2x^2 + 3^2x^3 + 4^2x^4 + ….$


Therefore, $C(x) = 6[x[\frac{d}{dx}[x(1-x)^{-2}]]]$. Solving it, we get $C(x) = 6x(1+x)(1-x)^{-3}$.

Putting everything together,


$A(x) = B(x) + C(x) = \frac{7x}{(1-x)^2} + \frac{6x(1+x)}{(1-x)^3}$

"Manipulation" technique (or what I like to call it)

Given a generating function $A(x) = \sum_{i=0}^{\infty}a_ix^i$, whose closed form is not immediately apparent, start off with a generating function $G(x)$ of the form $\frac{1}{(1\pm x)^n}$ (or even something more complicated) which looks kinda similar to that of $A(x)$. The idea is to manipulate $G(x)$ by a series of multiplication/differentiation/integration ..or whatever operations, to make the RHS of it similar to that of $A(x)$.


Here’s a quick example of what I meant. Consider $A(x) = 1^2 + 2^2x + 3^2x^2+ 4^2x^3 + ...$

It looks kinda similar to $(1-x)^{-2} = 1 + 2x + 3x^2 + 4x^3 + . . . $.

So we can manipulate it like this:

$=> (1-x)^{-2} = 1 + 2x + 3x^2 + 4x^3 + . . . $

$=> x(1-x)^{-2} = x + 2x^2 + 3x^3 + 4x^4 + . . . $

$=> \frac{d}{dx}[x(1-x)^{-2}] = 1 + 2^2x + 3^2x^2 + 4^2x^3 + . . . = A(x) $


Therefore $A(x) = \frac{d}{dx}[x(1-x)^{-2}]  = \frac{1+x}{(1-x)^3}$
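As a check of this closed form (a sketch, names my own): the coefficient of $x^j$ in $(1-x)^{-3}$ is $\binom{j+2}{2}$, so multiplying by $(1+x)$ just adds adjacent coefficients, and the result should be the perfect squares.

```python
from math import comb

# Verify that (1 + x)/(1 - x)^3 generates 1^2, 2^2, 3^2, ...
def inv_cube_coeff(j):
    """Coefficient of x^j in (1 - x)^(-3), i.e. C(j + 2, 2)."""
    return comb(j + 2, 2) if j >= 0 else 0

def a_coeff(i):
    # (1 + x) * (1 - x)^(-3): add the coefficients of x^i and x^(i-1).
    return inv_cube_coeff(i) + inv_cube_coeff(i - 1)

assert [a_coeff(i) for i in range(8)] == [(i + 1) ** 2 for i in range(8)]
print("closed form confirmed for the first 8 coefficients")
```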


(These were pretty much my notes on Generating Functions for GATE 2020. Although this should pretty much cover everything needed for GATE, do refer to explanations from books as well (I recommend ”Principles and Techniques in Combinatorics” by Chen Chuan Chong) and of course, practice the questions (a lot of them !!))

d3ba posted in Discrete Mathematics Mar 27, 2020 edited Jun 22, 2020 by d3ba

Hey there,

I've found an interactive website for revising some mathematics concepts (for Graph Theory and Probability).

Here's the link:


Harshada posted in Discrete Mathematics Jan 9, 2019

Information collected from multiple sources (primarily Wikipedia, Wolfram Alpha, and the Narsingh Deo textbook)









Matching, covering and Independent Set





If there are exactly 2 vertices of odd degree, then there is a path joining these 2 vertices.

Every cut set of a connected graph G includes at least one branch from every spanning tree of G.

The number of vertices of odd degree is always even.

Every circuit has an even number of edges in common with any cut set

If a simple graph with V vertices has K components, then it has at most (V-K)(V-K+1)/2 edges.
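The bound on edges for a graph with K components can be sketched as follows (an illustrative snippet; the extremal graph is a clique on V-K+1 vertices plus K-1 isolated vertices):

```python
# Maximum edges in a simple graph with V vertices and K components:
# one clique on V - K + 1 vertices plus K - 1 isolated vertices.
def max_edges(V, K):
    n = V - K + 1
    return n * (n - 1) // 2          # equals (V - K)(V - K + 1) / 2

assert max_edges(10, 1) == 45        # K10
assert max_edges(10, 3) == 28        # K8 plus two isolated vertices
```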

A graph with one or more edges is at least 2-chromatic.


A graph consisting of a single circuit on $n \ge 3$ vertices is 2-chromatic if $n$ is even and 3-chromatic if $n$ is odd.

Independence Number+Vertex Covering Number = |V|


If there are no isolated vertices, then Matching Number + Edge Covering Number = |V|


Matching No <= Vertex Cover No <= 2 * Matching No

Cycle graphs with an even number of vertices are bipartite




In a connected graph G with exactly 2k odd vertices, there exist k edge-disjoint subgraphs that together contain all edges of G, each of which is a unicursal graph.






An open walk that includes (or traces) all edges of a graph without retracing any edge is called a unicursal line or open Euler line. A connected graph that has a unicursal line is called a unicursal graph


There is one and only one path between every pair of vertices in a tree T

Number of edges = |V| - 1

Every edge of a tree is a cut set






Every tree is bipartite



Eulerian (undirected) graphs:

  • An Eulerian graph has all vertices of even degree.
  • A graph has an Eulerian trail if and only if exactly zero or two vertices have odd degree.
  • A graph has an Eulerian cycle if and only if its edge set can be decomposed into edge-disjoint cycles.
  • In either case, all vertices with nonzero degree belong to a single connected component.






A planar bipartite graph is dual to a planar Eulerian graph and vice versa


If |V| % 2 == 1 and |V| >= 3, the number of edge-disjoint Hamiltonian circuits in the complete graph on |V| vertices is (|V| - 1)/2.



The number of different Hamiltonian cycles in a complete undirected graph on V vertices is (V − 1)! / 2, and in a complete directed graph on V vertices it is (V − 1)!. These counts assume that cycles that are the same apart from their starting point are not counted separately.
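The undirected count can be checked by brute force for small V (an illustrative sketch; cycles differing only by starting point or direction are canonicalized to the same tuple):

```python
from itertools import permutations
from math import factorial

# Brute-force count of distinct Hamiltonian cycles in the complete graph K_V.
def hamiltonian_cycles_complete(V):
    seen = set()
    for perm in permutations(range(V)):
        i = perm.index(0)
        rot = perm[i:] + perm[:i]                   # fix the starting vertex
        canon = min(rot, rot[:1] + rot[1:][::-1])   # fix the direction
        seen.add(canon)
    return len(seen)

for V in (3, 4, 5):
    assert hamiltonian_cycles_complete(V) == factorial(V - 1) // 2
```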

A Hamiltonian graph on V nodes has graph circumference V.

  • A complete graph with more than two vertices is Hamiltonian.
  • Every cycle graph is Hamiltonian.
  • Every tournament has an odd number of Hamiltonian paths.
  • Every Platonic solid, considered as a graph, is Hamiltonian.
  • The Cayley graph of a finite Coxeter group is Hamiltonian.

An Eulerian graph G (a connected graph in which every vertex has even degree) necessarily has an Euler tour, a closed walk passing through each edge of G exactly once. This tour corresponds to a Hamiltonian cycle in the line graph L(G), so the line graph of every Eulerian graph is Hamiltonian. Line graphs may have other Hamiltonian cycles that do not correspond to Euler tours, and in particular the line graph L(G) of every Hamiltonian graph G is itself Hamiltonian, regardless of whether the graph G is Eulerian.


A strongly connected simple directed graph with n vertices is Hamiltonian if the sum of full degrees of every pair of distinct non-adjacent vertices is greater than or equal to 2n − 1.


A simple graph with n vertices has a Hamiltonian path if, for every pair of non-adjacent vertices, the sum of their degrees and their shortest path length is greater than n.

All Hamiltonian graphs are biconnected, but a biconnected graph need not be Hamiltonian (see, for example, the Petersen graph)

A tournament (with more than two vertices) is Hamiltonian if and only if it is strongly connected.


Bondy–Chvátal theorem

A graph is Hamiltonian if and only if its closure is Hamiltonian.



All planar 4-connected graphs have Hamiltonian cycles, but not all polyhedral graphs do

If, for every pair of nonadjacent vertices in a graph G, the sum of their degrees is greater than the number of vertices N, then G is Hamiltonian.


A simple graph with n vertices (n ≥ 3) is Hamiltonian if every vertex has degree n / 2 or greater


A graph with n vertices (n ≥ 3) is Hamiltonian if, for every pair of non-adjacent vertices, the sum of their degrees is n or greater 


A strongly connected simple directed graph with n vertices is Hamiltonian if every vertex has a full degree greater than or equal to n.

Spanning Tree

Adding just one edge to a spanning tree will create a cycle; such a cycle is called a fundamental cycle (fundamental circuit). There is a distinct fundamental cycle for each edge; thus, there is a one-to-one correspondence between fundamental cycles and edges not in the spanning tree. For a connected graph with V vertices, any spanning tree will have V − 1 edges, and thus, a graph of E edges and one of its spanning trees will have E − V + 1 fundamental cycles.
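The E − V + 1 count can be sketched in code (illustrative names; a tiny union-find grows a spanning tree and the leftover edges are exactly the fundamental-cycle generators):

```python
# Count fundamental cycles as E minus the number of spanning-tree edges;
# for a connected graph this equals E - V + 1.
def fundamental_cycle_count(V, edges):
    parent = list(range(V))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    tree_edges = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree_edges += 1
    return len(edges) - tree_edges

# K4 has V = 4, E = 6, so 6 - 4 + 1 = 3 fundamental cycles.
k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
assert fundamental_cycle_count(4, k4) == 3
```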

A shortest spanning tree T for a weighted connected graph G may be sought under the constraint deg(vi) <= k for all vertices in T; for k = 2, such a tree is a Hamiltonian path.

Every circuit of a connected graph G includes at least one link from every cospanning tree of G.







If the branches of the spanning tree T of a connected graph G are b1, . . . , bn−1 and the corresponding links of the cospanning tree T ∗ are c1, . . . , cm−n+1, then there exists one and only one circuit Ci in T + ci (which is the subgraph of G induced by the branches of T and ci).






Dual to the notion of a fundamental cycle is the notion of a fundamental cutset. By deleting just one edge of the spanning tree, the vertices are partitioned into two disjoint sets. The fundamental cutset is defined as the set of edges that must be removed from the graph G to accomplish the same partition. Thus, each spanning tree defines a set of V − 1 fundamental cutsets, one for each edge of the spanning tree.



Emax = |V| / 2


Bipartite graphs:

  • Contain only even-length cycles.
  • Number of perfect matchings in Kn,n = n!
  • Matching Number = Vertex Cover Number (König's theorem)
  • If there are no isolated vertices, then Independence Number = Edge Cover Number.
  • Minimum vertex cover of Km,n = min(m, n)




E = |V|*





Number of perfect matchings in K2n = (2n)! / (2^n * n!)

Equivalently, for Kn with n = 2m, this equals (2m-1)*(2m-3)*...*5*3*1.
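The equivalence of the two forms can be sketched in a few lines (illustrative helper names):

```python
from math import factorial

# (2n)! / (2^n * n!) equals the odd double factorial (2n-1)(2n-3)...3*1.
def perfect_matchings_k2n(n):
    return factorial(2 * n) // (2 ** n * factorial(n))

def odd_double_factorial(n):
    result = 1
    for k in range(1, 2 * n, 2):
        result *= k
    return result

for n in range(1, 9):
    assert perfect_matchings_k2n(n) == odd_double_factorial(n)
print(perfect_matchings_k2n(3))   # K6 has 15 perfect matchings
```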

Number of Perfect Matchings = (|V| - 1)!



A directed graph has an Eulerian trail if and only if at most one vertex has (out-degree) − (in-degree) = 1, at most one vertex has (in-degree) − (out-degree) = 1, every other vertex has equal in-degree and out-degree


A directed graph has an Eulerian cycle if and only if every vertex has equal in degree and out degree

In a connected  isograph D of n vertices and m arcs, let W = (a1, a2,..., am) be an Euler line, which starts and ends at a vertex v (that is, v is the initial vertex of a1 and the terminal vertex of am). Among the m arcs in W there are n − 1 arcs that enter each of  n−1 vertices, other than v, for the first time. The sub digraph D1 of these n−1 arcs together with the n vertices is a spanning arborescence of D, rooted at vertex v.
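The degree conditions for directed Eulerian trails and cycles stated above can be sketched as a small checker (illustrative names; connectivity must be verified separately):

```python
# Check the in-degree/out-degree conditions for a digraph given as a list of arcs.
def directed_euler_status(arcs):
    indeg, outdeg, nodes = {}, {}, set()
    for u, v in arcs:
        outdeg[u] = outdeg.get(u, 0) + 1
        indeg[v] = indeg.get(v, 0) + 1
        nodes.update((u, v))
    diffs = [outdeg.get(n, 0) - indeg.get(n, 0) for n in nodes]
    if all(d == 0 for d in diffs):
        return "eulerian cycle possible"
    if diffs.count(1) == 1 and diffs.count(-1) == 1 and all(d in (-1, 0, 1) for d in diffs):
        return "eulerian trail possible"
    return "neither"

assert directed_euler_status([(0, 1), (1, 2), (2, 0)]) == "eulerian cycle possible"
assert directed_euler_status([(0, 1), (1, 2)]) == "eulerian trail possible"
```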


Every directed graph contains a directed path of length at least Chromatic Number − 1 (Gallai–Roy theorem).



A digraph is said to be disconnected if it is not even weak. A digraph is said to be strictly weak if it is weak, but not unilateral. It is strictly unilateral if it is unilateral but not strong. Two vertices of a digraph D are said to be:

i. 0-connected if there is no semipath joining them,

ii. 1-connected if there is a semipath joining them, but there is no u−v path or v−u path,

iii. 2-connected if there is a u−v or a v−u path, but not both,

iv. 3-connected if there is both a u−v path and a v−u path.


Every planar graph with |V| ≥ 9 has a nonplanar complement.


If both graph G and its graph complement G’ are planar, then G has eight or fewer vertices.

For a simple, connected, planar graph with V vertices and E edges and F faces, the following simple conditions hold for V ≥ 3:

  • E ≤ 3V − 6
  • If there are no cycles of length 3,  then E ≤ 2V − 4.
  • F ≤ 2V − 4.


Euler's formula

V − E + F = 2


Planar graph density

D = (E-V+1)/(2V-5)

D=0 → Completely Sparse graph

D=1 → Completely Dense graph




Kuratowski's theorem

A finite graph is planar if and only if it does not contain a subgraph that is a subdivision of K5 (the complete graph on five vertices) or of K3,3.


Wagner's theorem

a finite graph is planar if and only if its minors include neither K5 (the complete graph on five vertices) nor K3,3 


A graph is planar if it has a combinatorial dual graph.

Only planar graphs have duals. 



Balaji Jegan posted in Discrete Mathematics Oct 24, 2018 edited Sep 4, 2020 by Balaji Jegan


Day Date Contents Slides Assignments
 1  Sep 14 Introduction to probability- simple problems- Rolling a die n times and find the probability of getting at least one six, 
probability of forming two men and two women out of four people
 Lecture Notes - Best one  
2 Sep 17 Class Test- 5 Questions: Time-15 minutes
1. 3 men and 3 women are to be seated (i) In a row of chairs (ii) Around a table. What is the probability that all women are seated together in both cases?
2. Divide 52 cards into 4 sets of 13. What is the probability that each set has an Ace?
Answers in different methods.
3. Roll a die 12 times- probability that each digit appears exactly twice
Question Paper-Test1  
3 Sep 18 Probability Space- pairwise disjoint events 
x^2+y^2 < 1 -to find probability of an area in this circular portion
1) In a class there are 9 students, and each of them buys a gift; the gifts are distributed randomly. What is the probability that
i) a particular student gets back his/her own gift?
ii) any of the students gets back his/her own gift?
2) Birthday problem
3) Coupon collector problem
Conditional Probability
Problem: Two fair coins are tossed. Given that at least one head appears, what is the probability that both tosses are heads?
First Bayes' formula - Problem: Flip a fair coin. If a head comes up, one die is rolled; if a tail comes up, two dice are rolled. Compute the probability of getting exactly one six.
Independent events
 4  Sep 20 Bernoulli Trial
Problem: A and B play a series of games. A lost first game. What is the probability that A wins the series?
Random variable- X : Ω → R
Example : Tossing two fair coins. X is the random variable denoting number of heads.
Probability Mass Function(PMF)- examples
Expectation, Variance, Standard deviation - with examples
Linearity of expectation
 5  Sep 21-Sep 24 Solve questions from GO pdf and  Lecture Notes    
  6 Sep 27 Continuous probability distributions - Probability density function
Uniform random variable - Problem: Assume that X is uniform on [0, 1]. What is P(X ∈ Q)? What is the probability that the binary expansion of X starts with 0.010?
Exponential random variable- Memoryless property: P (X ≥ x + y|X ≥ y) = e^ {−λx}
Problem: Assume that a lightbulb lasts on average 100 hours. Assuming exponential distribution, compute the probability that it lasts more than 200 hours and the probability that it lasts less than 50 hours.
Normal random variable- standard normal distribution
Problem: Assume that X is Normal with mean μ = 2 and variance σ^2 = 25. Compute the probability that X is between 1 and 4.
Expectation, Variance, PDF, Cumulative function of continuous random variables

For more information visit GO Classroom:

Manoja Rajalakshmi A posted in Probability Sep 27, 2018
Day Date Contents Slides Assignments
1 July 2 No discussion   Assignment 1
2 July 3 Introduction to Sets, Relations, Functions    
3 July 4 Equivalence Relations, Types of Functions   Assignment 2
  July 5 No Discussion: Problem solving from GO PDF    
4 July 17  

 Introduction to Groups

5 July 18  Groups, Fermat's Little Theorem. Introduction to Ring, Field.
 Partial Orders, Lattice.


Day Date Contents Slides Assignments
1 July 6 Propositional Logic, Truth Table, Implications, Validity, Satisfiability    
  July 7 No discussion Problem Solving from GO PDF    
 2 July 8  First Order Logic - Existential and Universal Quantifiers,
DeMorgan's law, How to check for validity/satisfiability of 
FOL statements



Day Date Contents Slides Assignments
1 July 9 Introduction to Combinatorics    
2 July 10 Balls & Bins problems - 4 Types Slide 1  
3 July 11  Balls & Bins problems continuation, Stirling's number of second kind Slide 2 Assignment 1
  July 12  Problem solving from GO PDF    
4 July 13  Integer Partition Problem Revisited   Assignment 2
5 July 16  Generating Functions, how to write a generating function for a given sequence, solution using calculus, generating function for Fibonacci series    
6 July 17  Recurrence Relation
  •  How to solve a recurrence relation recursively 
  •  Number of binary/ternary strings of length n having k consecutive 0's
7 July 20  Catalan Number, Recurrence for it, Generating Function Solution, Pigeonhole Principle and its generalization. Example problems to be solved from Rosen.    


Day Date Contents Slides Assignments
 1  Aug 24 Graph theory- Introduction-definition of graph, simple graph, finite graph, degree of a vertex, isolated and pendant vertices, null graph, subgraph of a graph, walk-open and closed, path, circuit,Euler line, Euler graph, Hamiltonian circuit, Hamiltonian path, examples, Complete graph, tree, eccentricity-centre, spanning tree of graph, connected components of graph, branch, chord, rank and nullity of graph, distance in spanning tree, Cut-set - some lemmas    
 2  Aug 25-Aug 29 No discussion, Solve previous year GATE questions    
 3  Aug 30 Graph colouring- chromatic number, matching  Graph theory  

For more information visit GO Classroom :



Manoja Rajalakshmi A posted in Discrete Mathematics Sep 17, 2018 recategorized Sep 17, 2018 by Manoja Rajalakshmi A
I am not able to solve previous years' GATE questions. I have learnt the basics but am getting stuck on almost every question. Please suggest some resources for this, keeping the time constraint in mind.
Bad_Doctor posted in Probability Dec 11, 2017

I found this channel on YouTube which has short videos for Maths and Discrete Maths topics.

Watch the videos for topics you want to revise.

Amitesh Sharma posted in Discrete Mathematics Jan 7, 2017