Final Exam Update Huawei

August 23, 2022 | Author: Anonymous | Category: N/A

True or False

When a function is called in Python, immutable objects such as numbers and characters are passed by value.

True
False

When a function is called in Python, mutable objects such as lists and dictionaries are passed by reference.

True
False

The activation function in a neural network can be a non-linear function.

True
False

Overng occurs only in regression problems, no in classicaton problems. o

True



False

The main dierence beween he ID3 and C4.5 algorihms lies in he evaluaton crieria of node classicaton 

True

o

False

The C4.5 algorihm uses he Gini index as he evaluaton crieria for node classicaton. 

o

True False

If he number of layers of a neural nework is oo large, gradien disappearance or gradien explosion may occur. 

True

o

False

For daa wih wo dimensions, when k-means is used for clusering, he clusering resul is displayed as a sphere in he space. 

True

o

False

Aer raining he suppor vecor machine (SVM), you can only reain he suppor vecor and discard all non-suppor vecors. The classicaton capabiliy of he model remains unchanged. 

True

o

False

In he convolutonal neural nework (CNN), convolutonal layers and pooling layers mus appear alernaely.  

True

o

False

If here is a complex non-linear relatonship beween independen variable x and dependen variable y, he ree model may be used as a regression mehod.  

True

o

False

Principal componen analysis (PCA) can grealy reduce he daa dimension when mos informaton of he original daase is conained.  

True

o

False

 

In Pyhon, he tle() functon can capialize he inital leer of a sring.  

True

o

False

In Pyhon, when an objec is deleed, he desrucor functon is auomatcally called. 

True

o

False

In Pyhon, multple inheriance is suppored during class deniton. 

True

o

False

In Pyhon, satc variables and satc mehods are insances. o

True



False

Convolutonal neural nework (CNN) can only be used o solve visual problems and canno be used for naural language processing. o

True



False

Suppor vecor machine (SVM) has a good eec in dealing wih high-dimensional nonlinear problems.  

o

In

True False

Pyhon,

a

satc

mehod

can

be

direcly

accessed

and

does

CLASSNAME.STATIC_METHOD_NAME(). o

True



False

In Pyhon, he sring functon capialize() can capialize he inital leer of a sring. o

True



False

no

need

o

be

called

using

 

Multiple Choice (Single Answer)

Assume that there is a simple multi-layer perceptron (MLP) model with three neurons and the input [1, 2, 3], and the weights of the neurons are 4, 5, and 6 respectively. If the activation function is linear with a constant factor of 3 (the activation function is y = 3x), which of the following values is the output? A. 32 B. 48 C. 96 D. 128
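The MLP computation above can be checked with a short Python sketch. It assumes (based on how the answer options work out) that the three weighted inputs feed a single output neuron:

```python
# Sketch of the MLP question: three inputs, three weights, one output
# neuron, linear activation y = 3x.
inputs = [1, 2, 3]
weights = [4, 5, 6]

weighted_sum = sum(x * w for x, w in zip(inputs, weights))  # 1*4 + 2*5 + 3*6 = 32
output = 3 * weighted_sum  # apply the linear activation y = 3x

print(output)  # 96
```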

Assume ha he saemen prin(6.3 – 5.9 == 0.4) is execued in he Pyhon inerpreer, and he resul is False. Which of  he following saemens abou he resul is rue? A. The Boolean operaton canno be used for comparing oatng-poin numbers B. I is caused by he prioriy of operaors C. Pyhon canno exacly represen oatng-poin numbers D. In Pyhon, he non-zero value is inerpreed as false

For a neural nework, which of he following iems has he bigges impac on overng or underng? A. Inital weighs B. Learning rae C. Number of nodes a he hidden layer D. None of he above

Which of he following inroduces nonlineariy ino a neural nework? A. Sochastc gradien descen B. Rected linear uni (ReLu) C. Convoluton functon D. None of he above Assume ha raining daa is sucien, and he daase is used o rain a decision ree. To reduce he tme required for model raining, which of he following saemens is rue? A. Increase he deph of he ree B. Reduce he deph of he ree C. Increase he learning rae D. Reduce he learning rae

Imbalanced daa of binary classicaton refers o he daase wih a large dierence beween he proporton of positve samples and he proporton of negatve samples, for example, 9:1. If a classicaton model is rained based on he daase and he accuracy of he model on raining samples is 90%, which of he following saemens is rue? A. The accuracy of he model is high, and he model does no need o be optmized. B. The accuracy of he model is no satsfacory, and he model needs o be rerained aer daa sampling. C. The model qualiy canno be evaluaed D. None of he above. Which of he following is no a classicaton algorihm? A. Nonlinear separable suppor vecor machine. B. Logistc regression C. Principal componen analysis D. Random fores Which of he following saemens abou suppor vecor machines (SVM) is false? A. SVM is a binary classicaton model B. In high-dimensional space, SVM uses hyperplanes wih he maximum inerval for classicaton. c lassicaton. C. Kernel functons can be used o consruc nonlinear separable SVM D. The basic concep of kernel functons is o classify daa hrough dimensionaliy reducton

 

Which of he following assumptons can be made abou linear regression? A. I is imporan o nd ouliers because linear regression is sensitve o ouliers. B. Linear regression requires ha all variables be in normal disributon. C. Linear regression assumes ha daa does no have multple linear correlatons. D. None of he above

Which of he following procedures is no a procedure for building a decision ree? A. Feaure selecton B. Decision ree generaton C. Finding he suppor vecor D. Pruning

When decision ree is used for classicaton, if he value of an inpu feaure is contnuous, he dichoomy is used o discretze he contnuous aribue. I means ha he classicaton is performed based on wheher he value is greaer han or less han a hreshold. If he mult-pah division is used, each value is divided ino a branch. Wha is he bigges problem of his mehod? A. The computng workload is oo heavy. B. The performance of boh he raining se and he es se is poor. C. The performance of he raining se is good, bu he performance of he es se is poor. D. The performance of he raining se is poor, and he performance of he es se is good.

For a daase wih only one dependen variable x, wha is he number of coecien(s) required o consruc a simples linear regression model? A. 1 B. 2 C. 3 D. 4

Which of he following algorihms is no an ensemble algorihm? A. XGBoos B. GBDT C. Random fores D. Suppor vecor machine (SVM)

Assume ha a classicaton model is buil using logistc regression o obain he accuracy of raining samples and es samples. Then, add a new feaure o he daa, keep oher feaures unchanged, and rain he model again. Which of he following saemen is rue? amples will deniely decrease. A. The accuracy of raining ssamples B. The accuracy of es samples will deniely decrease. C. The accuracy of raining samples s amples remains unchanged or increases. D. The accuracy of es samples remains unchanged or increases. Second Answer: C. Abou he values of four variables a, b, c, and d aer executng he following code, which of he following saemens is false? impor copy a = [1, 2, 3, 4, [‘a’,’b’] b=a c = copy.copy(a) d = copy.deepcopy(a) a.append(5) a[4].append(‘c’) A. a == [1,2,3,4,[‘a’,’b’,’c’],5] B. b == [1,2,3,4,[‘a’,’b’,’c’],5] C. c == [1,2,3,4,[‘a’,’b’,’c’]] D. d == [1,2,3,4,[‘a’,’b’,’c’]]

 

The synax of sring formang is ? A. GNU\’s No %s %%’ % ’UNIX’ B. ‘GNU\’s No %d %%’ % ’UNIX’ C. ‘GNU’s No %s %%’ % ’UNIX’ D. ‘GNU’s No %d %%’ % ’UNIX’

Which of he following saemens abou a neural nework is rue? A. Increasing he number of neural nework layers may increase he classicaton error rae of a es se. B. Reducing he number of neural nework layers can always reduce he classicaton error rae of a es se. C. Increasing he number of neural nework layers can always reduce he classicaton error rae of a raining se. D. The neural nework can fully  all daa

For a mult-layer percepron (MLP), he number of nodes a he inpu layer is 10, and he number of nodes a he hidden layer is 5. The maximum number of connectons from he inpu layer o he hidden layer is? A. I depends on he siuaton. B. Less han 50 C. Equal o 50 D. Greaer han 50

Assume ha here is a rained deep neural nework model for identfying cas and dogs, and now his model will be used o deec he locatons of cas in a new daase. Which of he following saemens is rue? A. Rerain he existng model using a new daase. B. Remove he las layer of he nework and rerain he existng model. C. Adjus he las several layers of he nework and change he las layer o he regression layer. D. None of he above Which of he following saemens abou he k-neares neighbor (KNN) algorihm is false? A. KNN is a non-parameric mehod me hod which is usually used in daases wih irregular decision boundaries. B. KNN requires huge computng amoun C. The basic concep of KNN is “Birds of a feaher ock ogeher” D. The key poin of KNN is node spling

Assume he raining daa is sucien, and he daase is used o rain a decision ree. To reduce he tme required for model raining, which of he following saemens is rue? A. Increase he deph of he ree B. Reduce he deph of he ree C. Increase he learning rae D. Reduce he learning rae

If you wan o predic he probabiliy of n classes (p1, p2, …, pk), and he sum of probabilites of n classes is equal o 1, which of he following functons can be used as he actvaton functon in he oupu layer? A. somax B. ReLu C. sigmoid D. anh

Which of he following is false? A. (1) B. (1,) C. (1, 2) D. (1, 2, (3, 4))

 

Which of he following saemens abou srings is false? A. Characers should be considered as a sring of one characer B. A sring wih hree single quoatons (“”) can conain special characers such as line feed and carriage reurn. C. A sring ends wih \0 D. A sring can be creaed by using a single-quoaton mark(‘) or double quoaton marks (‘’). In Pyhon 3.7, he resul of executng he code prin(ype(3/6) is? A. in B. oa C. 0 D. 0.5 Is i necessary o increase he size of a convolutonal kernel o improve he eec of a convolutonal neural nework (CNN)? A. Yes B. No C. I depends on he siuaton D. Uncerain Deep learning can be used in which of he following naural language asks? A. Sentmenal Analysis B. Q&A sysem C. Machine ranslaton D. All of he above If you use he actvaton functon “X” a he hidden layer of a neural nework and give any inpu o a specic neuron, you will ge he oupu [-0.0001]. Which of he following functons is “X”? A. ReLuo B. anho C. sigmoido D. None of he above Which of he following functons canno be used as an actvaton functon of a neural nework? A. y = sin(x) B. y = anh(x) C. y = max(0, x) D. y = 2x Polysemy can be dened as he coexisence of multple meanings of a word or phrase in a ex objec. Which of he following mehods is he bes choice o solve his problem? A. Convolutonal neural nework (CNN) B. Gradien explosion C. Gradien disappearance D. All of he above In deep learning, a large number of marix operatons are involved. Now he produc ABC of hree dense marices A, B and C needs o be calculaed. Assume ha sizes of he hree marices are m x n, n x p, and p x q respectvely, and m < n < p < q, hen which of he following calculaton sequences is he mos ecien one? A. (AB)C B. A(BC) C. (AC)B D. 
A(CB) Assume ha here are wo neural neworks wih dieren oupu layers, There is one oupu node in he oupu layer of  nework newo rk 1, whereas whereas here are wo oupu oupu nodes in he oupu layer of newo nework rk 2. For a binary binary classicaton classicaton problem problem,, which of he following mehods do you choose? A. Use nework 1 B. Use nework 2 C. Eiher of hem can be chosen o use

 

D. Neiher of hem can be chosen When a pooling layer is added o a convolutonal neural nework (CNN), will he ranslaton invariance be reained? A. Uncerain B. I depends on he acual siuaton C. Yes, i will be reained D. No, i will no be reained Which of he following variable names is rue? A. daa? B. ?daa C. _daa D. 9daa The resul of executng he code prin(‘a’ ‘b’ D. c

The resul of invoking he following functon is? def basefunc(rs): def innerfunc(second): reurn rs ** second reurn innerfunc A. base(2)(3) == 8 B. base(2)(3) == 6 C. base(3)(2) == 8 D. base(3)(2) == 6

The resul of executng he following code is? daa = [1, 3, 5, 7] daa.append([2, 4, 6, 8]) prin(len(daa)) A. 4 B. 5 C. 8 D. An error occurred The resul of executng he following code is? for i in range(1,3): prin(i) for j in range(2):

 

prin(j) A. 1 3 2 B. 1 2 0 1 C. 1 3 0 1 D. 1 3 0 2 Generally, which of he following mehods is used o predic contnuous independen variables? A. Linear regression B. Logistc regression C. Boh linear regression and logistc regression D. None of he above Daa scientss may use multple algorihm (models) a he same tme for predicton, and inegrae he resuls of hese algorihms for nal predicton (ensemble learning). Which of he following saemens abou ensemble learning is rue? A. High correlaton exiss beween single models B. Low correlaton exiss beween single models C. I is beer o use weighed average insead of votng in ensemble learning D. One algorihm is used for a single model

 

Multiple Choice (Multiple Answers)

Principal component analysis (PCA) is a common and effective method for dimensionality reduction. Which of the following statements about PCA are true? A. Before using PCA, data standardization is required. B. Before using PCA, data standardization is not required. C. The principal component with the maximum variance should be selected. D. The principal component with the minimum variance should be selected.

Which of the following criteria can evaluate the quality of a model? A. Accurate prediction can be achieved by the model in actual services. B. With increasing traffic volume, the prediction rate of the model is still acceptable. C. The model design is complex and difficult to understand and explain. D. The user interface of the service system where the model is located is user-friendly.

Which of he following saemens abou he gradien boostng decision ree (GBDT) algorihm are rue? A. Increasing he minimum number of samples used for segmenaton helps preven over B. Increasing he minimum number of samples used for segmenaton may cause overng. C. Reducing he sample rato of each basic ree helps reduce he variance. D. Reducing he sample rato of each basic ree helps reduce he deviaton.

Variable selection is used to select the best discriminator subset. What needs to be considered to ensure the efficiency of the model? A. Whether multiple variables have the same function B. Whether the model is interpretable C. Whether the feature carries valid information D. Price difference verification

Which of he following actvaton functons can be used for image classicaton a he oupu layer? A. sigmoid* B. anh* C. ReLu * D. Piecewise functons

Which of he following assumptons are used o derive linear regression parameers? A. There is a linear relatonship beween independen variables and dependen variables B. Model errors are independen in satstcs C. The error generally obeys he normal disributon of 0 and he sandard deviaton of he xed average value D. The independen variable is non-random and has no measuremen error.

Which of he following saemens abou he convolutonal neural nework (CNN) are rue? A. Increasing he size of convolutonal kernels can signicanly improve he performance of he CNN. B. Pooling layers in he CNN keep ranslaton invariance. C. Daa feaures need o be exraced before using a CNN. D. In a CNN, he convolutonal kernel a each layer is he weigh o be learned.

Which of he following saemens abou TensorFlow 2.0 are rue? A. TensorFlow 2.0 requires he consructon of a compuatonal graph a rs, hen you can sar a session, impor daa o he session, and perform raining. B. Eager executon is enabled in TensorFlow 2.0 by defaul. I is a ype of command line programming, making he executon simpler. C. In TensorFlow 2.0, if you wan o build a new layer, you can direcly inheri .keras.layers.Layer D. In he defaul mode of TensorFlow 2.0, .daa.Daase is an ieraor. Which of he following saemens abou he applicaton of deep learning mehods are rue?

 

A. Massive discree daa can be encoded using embedded mode as inpu of he neural nework, which grealy improves he eec of daa analysis. B. The convolutonal neural nework (CNN) is well applied in he eld of image processing, bu i canno be used in naural language processing. C. The recurren neural nework (RNN) is mainly used o deal wih sequence-o-sequence problems, bu i oen encouners he problems of gradien disappearance and gradien explosion. D. The generatve adversarial nework (GAN) is a mehod used for model generaton. When daa volume exceeds he capaciy capaciy of he memory, which of he following following mehods used o eectvely eectvely rain he model? A. Organizing he daa and supplementng he missing daa B. Sampling daa and raining models based on he sampled daa C. Reducing daa dimensions using he PCA algorihm D. Improving daa capaciy hrough inerpolaton mehod Which of he following measures can be aken o preven overng in he neural nework? A. Dropou B. Daa augmenaton C. Weigh sharing D. Early sopping

Abou he single-underscored member_proc, double-underscored _proc member, and _proc_in Pyhon, which of he following saemens are rue? A. from module impor * can be direcly used o impor he single-underscored member _proc B. from module impor * canno be direcly used o impor he double-underscored member _proc C. In Pyhon, he parser uses_classname_proc o replace he double-underscored member _proc D. In Pyhon, _proc_ is a specic indicaor o magic mehods The core idea of convolutonal neural nework (CNN) are? A. Mainly for image daa processing B. Local receptve eld C. Parameer sharing D. High-qualiy daa inpu and high-qualiy oupu

Which of he following issues need o be considered when you selec he deph of a neural nework? A. Neural nework ypes B. Inpu daa ype and quanty C. Learning rae D. Developmen framework o be used

Which of he following saemens abou long shor-erm memory (LSTM) are rue? s electvely forge he inpu ransferred from he previous node. A. The forge phase of LSTM is o selectvely B. The selectve memory phase of LSTM is o selectvely memorize he inpu. C. The updae phase of LSTM is o updae he memory informaton. D. The oupu phase of LSTM is o deermine which will be considered as he oupu of curren sae. Which of he following saemens abou deep learning are rue? A. The negatve side of ReLu is a dead zone, leading o he gradien becomes 0. B. The sigmoid functon is beer han he ReLu functon in preventng he gradien disappearance problem. C. The long shor erm memory (LSTM) adds several channels and gaes based on he recurren neural nework (RNN) D. Gaed recurren uni (GRU) is a simplied version of LSTM.

 

Which of he following operatons belong o he daa cleansing process? A. Pro Proce cess ssin ing g los los da daa a B. Pro Proces cessin sing g abn abnorm ormal al val values ues C. Obai Obaining ning daa daa ha is dicul dicul o be obained obained by by ohers h hrough rough special special channel channelss D. Co Comb mbin inin ing g da daa a

The neural nework is inspired by he human brain. A neural nework consiss of many neurons, and each neuron receives an inpu and provides an oupu aer processing he inpu. Which of he following saemens abou neurons are rue? A. Each neuron can have one inpu and one oupu B. Each neuron can have multple inpus and one oupu C. Each neuron can have one inpu and multple oupus D. Each neuron can have multple inpus and oupu

Which of he following layers are usually included in a deep neural nework used for image recogniton? A. Convolutonal layer B. Pooling layer C. Recurren layer D. Fully conneced layer

Wha are he dierences beween _ini_and_new_in Pyhon? A. _ini_ is an insance mehod, whereas_new_is a satc mehod B. No value is reurned for _ini_, whereas an insance is reurned for_new_. C. _new_ is called o creae an insance, whereas _ini_ is called o initalize an insance D. Only when_new_reurns a cls insance, he subsequen_ini_can be called. Feaures selecton is necessary before model raining. Which of he following saemens are he advanages of feaure selecton? A. I can improve model generalizaton and avoid overng. B. I can reduce he tme required for model raining. C. I can avoid dimension explosion. D. I can simplify models o make hem easy for users o inerpre. Daa cleansing is o clear diry daa in a daase. The diry daa refers o? A. Daa ha is sored in he devices aeced by some polluans. B. Daa ha conains incorrec records or exceptons C. Daa ha conains conradicory and inconsisen records D. Daa ha lacks some feaures or conains some missing values

If daa = (1, 3, 5, 7, 7 , 9, 11), which of he following operatons are valid? A. daa[1 : -1] B. daa[1 : 7] C. lis(daa) D. daa * 3 If here is a = range(100), which of he following operatons are valid? A. a[-1] B. a[2 : 99] C. a[ : - 1 : 2] D. a[5 - 7]

Which of he following saemens abou he recurren neural nework (RNN) are rue? A. The sandard RNN solves he problem of informaton memory. Is advanage is ha even if he number of memory unis is limied, he RNN can keep he long-erm informaton. B. The sandard RNN can sore conex saes and can exend on he tme sequences.

 

C. The sandard RNN capures dynamic informaton in serialized daa by periodical connecton of nodes a he hidden layer D. Inuitvely, here is no need o connec nodes beween he hidden layer a he curren momen and he hidden layer a he nex momen in he RNN.

During neural nework raining, which of he following phenomena indicae ha gradien explosion problem may occur? A. The model gradien increases rapidly B. The model weigh changes o NaN value C. The error gradien value of each node and layer contnuously exceeds 1.0. D. The loss functon contnuously decreases.

When he parameers are he same in all cases, and how does he number of sample observaton tmes aec overng? A. The number of observaton tmes is small, and overng is likely o occur. B. The number of observaton tmes is small, and overng is no likely o occur. C. The number of observaton tmes is large, and overng is likely o occur. D. The number of observaton tmes is large, and overng is no likely o occur

Which of he following saemens are he functons of he pooling layer in a convolutonal neural nework (CNN)? A. Reducing he inpu size for he nex layer B. Obaining xed-lengh daa C. Increasing he scale D. Preventng overng

Which of he following saemens abou generatve adversarial nework (GAN) are rue? A. The GAN conains a generatve model (generaor) ha akes a random vecor as inpu and decodes i as a specic oupu. B. The GAN conains an adversarial model (adversarial device) ha ransforms specic inpu and oupus adversarial daa ha conradics he inpu. C. The GAN conains a discriminatve model (discriminaor) ha can deermine wheher he inpu daa is from he raining se or synhesized hrough daa. D. The GAN is a dynamic sysem. Is optmizaton process is no o nd a minimum value, bu o nd a balance beween wo forces.
