
Magic, Music and Math

Computer Engineering Student

Syllabus of BCT

Course Structure for Bachelor's Degree in Computer Engineering

Here I present the TU, IOE syllabus of BCT.

1. Year: 1, Semester: Odd

| Code  | Title                        | Exam Type | Theory Ass. | Theory Final | Practical Ass. | Practical Final | Total |
|-------|------------------------------|-----------|-------------|--------------|----------------|-----------------|-------|
| SH401 | Engineering Mathematics I    | T         | 20          | 80           | –              | –               | 100   |
| CT401 | Computer Programming         | B         | 20          | 80           | 50             | –               | 150   |
| ME401 | Engineering Drawing I        | P         | –           | –            | 60             | 40              | 100   |
| SH402 | Engineering Physics          | B         | 20          | 80           | 20             | 30              | 150   |
| CE401 | Applied Mechanics            | T         | 20          | 80           | –              | –               | 100   |
| EE401 | Basic Electrical Engineering | B         | 20          | 80           | 25             | –               | 125   |

Total Marks: 725

2.

Nepali_OCR_Python

Nepali OCR detector

In this post I present Nepali Optical Character Recognition (OCR), which extracts Nepali text from images and scanned documents so that it can be edited, formatted, indexed, searched, or translated. The code below is written in Python, with easyocr at the heart of this post.

OCR

Optical character recognition or optical character reader (OCR) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo (for example, the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example, from a television broadcast).
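As a minimal sketch of the easyocr workflow described above (assuming easyocr and its model weights are installed; `sample.png` is a hypothetical image path, not a file from the post):

```python
import easyocr

# Build a reader for Nepali ('ne') plus English; models are
# downloaded automatically on first use.
reader = easyocr.Reader(['ne', 'en'])

# readtext returns a list of (bounding_box, text, confidence) tuples.
results = reader.readtext('sample.png')
for box, text, confidence in results:
    print(text, confidence)
```

The extracted strings can then be edited, indexed, or passed on to a translation step, as the post describes.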

Julia_Set_Fractal

Julia Set

The Julia set is named after the French mathematician Gaston Julia, who investigated its properties circa 1915, culminating in his famous 1918 paper. While the Julia set is now associated with a simpler polynomial, Julia was interested in the iterative properties of a more general expression, namely z^4 + z^3/(z - 1) + z^2/(z^3 + 4z^2 + 5) + c. The Julia set is now associated with those points z = x + iy on the complex plane for which the series z_{n+1} = z_n^2 + c does not tend to infinity.
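The escape-time test implied by this definition can be sketched as follows (the iteration budget and the escape radius of 2 are conventional choices, not values taken from the post):

```python
def escape_time(z, c, max_iter=100, radius=2.0):
    """Iterate z_{n+1} = z_n^2 + c; return the step at which |z|
    exceeds the escape radius, or max_iter if it stays bounded."""
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return max_iter

# For c = 0 the dynamics are just repeated squaring: points inside
# the unit circle stay bounded, points outside radius 2 escape at once.
print(escape_time(0.5 + 0j, 0j))   # stays bounded -> 100
print(escape_time(3.0 + 0j, 0j))   # |z| > 2 immediately -> 0
```

Colouring each pixel of the plane by its escape time is what produces the familiar fractal images.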

AutoEncoder

AUTO ENCODER

In this notebook you will find how autoencoders are trained in TensorFlow (Keras). The notebook covers two different types of encoder: a normal *Auto Encoder* and a *Denoising Auto Encoder*.

Normal Encoder

LIBRARIES ARE LOADED

```python
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
```
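The notebook itself uses Keras; as a framework-agnostic sketch of the idea — an encoder compressing the data, a decoder reconstructing it, both trained by gradient descent — here is a minimal linear autoencoder in NumPy (the dimensions, learning rate, and step count are illustrative choices, not taken from the notebook):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # 200 samples, 8 features

W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder: 8 -> 3 latent dims
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder: 3 -> 8
lr = 0.01

def loss(X, W_enc, W_dec):
    R = X @ W_enc @ W_dec                    # reconstruction
    return np.mean((X - R) ** 2)

start = loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc                            # latent codes
    E = Z @ W_dec - X                        # reconstruction error
    # gradient-descent updates on both weight matrices
    G_dec = Z.T @ E / len(X)
    G_enc = X.T @ (E @ W_dec.T) / len(X)
    W_dec -= lr * G_dec
    W_enc -= lr * G_enc

end = loss(X, W_enc, W_dec)
print(start, end)                            # reconstruction error decreases
```

A Keras version replaces the two matrices with `Dense` layers and the loop with `model.fit`, but the training signal — reconstruction error pushed down by gradient descent — is the same.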

Simple Math

NUMPY AND PYTHON DERIVATIVE OF A POLYNOMIAL FUNCTION

```python
import numpy as np

def basis_function(x):
    a = np.eye(x)
    b = np.zeros((x, 1))
    c = np.concatenate((b, a), axis=1)
    d = np.array(range(1, x + 1)).reshape(x, 1)
    e = c * d
    return e
```

Enter the coefficients of the equation in the given pattern: constant, x, x^2, x^3, ...

```python
# ENTER HERE
d = np.array([34, 90, 34, 90])  # numbers must be separated by commas
d
```

```python
print(f'your equation is {d[0]} + {d[1]}x + {d[2]}x^2 + ...')
# to find the length
e = len(d)
y = e - 1
# reshape
d = d.
```
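Putting the pieces together: the matrix built here acts as a derivative operator on the coefficient vector. A self-contained sketch of that idea, using the example coefficients from the post (`derivative_matrix` is a renamed version of `basis_function` for clarity; the name is mine, not the post's):

```python
import numpy as np

def derivative_matrix(n):
    """n x (n+1) matrix mapping coefficients [c0, c1, ..., cn]
    (constant term first) to the derivative's coefficients."""
    a = np.eye(n)
    b = np.zeros((n, 1))
    scale = np.arange(1, n + 1).reshape(n, 1)
    return np.concatenate((b, a), axis=1) * scale

coeffs = np.array([34, 90, 34, 90])          # 34 + 90x + 34x^2 + 90x^3
deriv = derivative_matrix(len(coeffs) - 1) @ coeffs
print(deriv)                                 # [ 90.  68. 270.] i.e. 90 + 68x + 270x^2
```

Each row of the matrix picks out one coefficient and multiplies it by its power, which is exactly the power rule d(c·x^k)/dx = k·c·x^(k-1) written as a linear map.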

Unsupervised

VAE

VAE TYPES

Auto Encoder

This is a simple generative setup in which one model reduces the data, like a compressor, and another expands it back, like a decompressor. The model contains a separate neural network for each of these two tasks. Here it is difficult to generate a desired image deterministically from the compressed latent space.

Variational Autoencoders

Up to now, we have discussed the dimensionality reduction problem and introduced autoencoders: encoder-decoder architectures that can be trained by gradient descent.
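The VAE's answer to the awkward latent space described above is to have the encoder output a distribution rather than a single point, and to sample from it with the reparameterization trick. A minimal NumPy sketch of that sampling step (the batch and latent dimensions are illustrative; a real VAE would get `mu` and `log_var` from the encoder network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend the encoder produced these for a batch of 4 inputs:
mu = rng.normal(size=(4, 2))         # mean of each latent Gaussian
log_var = rng.normal(size=(4, 2))    # log-variance of each

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1).
# The randomness lives entirely in eps, so gradients can still flow
# through mu and sigma during training.
eps = rng.standard_normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

print(z.shape)                       # (4, 2): one stochastic code per input
```

Because nearby points in latent space now decode to similar outputs, sampling new codes and decoding them yields coherent generated images, which the plain autoencoder struggles to do.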