And then we're gonna import Flatten because we're going to be using a convolutional neural network.
But a common thing that people are going to do is, right before you have your output layer, you do just a simple Dense layer, which is why we're importing that there. And to do that, you'll need to first flatten the data.
So we've got those things.
We also want Conv2D because we've got two-dimensional convolutional layers that we're gonna be using, and then also we're gonna have pooling layers, so import those as well. Those are probably just crossed out because we haven't used them yet.
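As a rough sketch of what that import block might look like (assuming the standard tf.keras module paths; some setups import from keras directly instead):

```python
# A minimal sketch of the imports described above, assuming tf.keras.
# Conv2D / MaxPooling2D cover the 2D convolution and pooling layers,
# and Flatten + Dense cover the transition to the final output layer.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
```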
So our model is going to be a Sequential. Then, um, the next thing we're gonna do is start adding in all of the layers to the model, and, uh, I don't really see much point in writing all that out.
So, copy-paste. What's going on here is we're just adding in the layers, so you've got a convolutional layer, a convolutional layer, then we've got some pooling.
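Roughly, that copy-pasted block would look something like the sketch below. The filter counts, kernel sizes, input shape, and output layer here are placeholder assumptions for illustration, not the exact values used in the video:

```python
# Hypothetical layer stack showing the pattern described:
# convolution, convolution, pooling, then flatten into dense layers.
# Input shape and layer sizes are assumptions, not the video's exact values.
model = Sequential()
model.add(Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)))
model.add(Conv2D(64, (3, 3), activation="relu"))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="sigmoid"))  # placeholder output layer
```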
So we're gonna set the learning rate, and the one I eventually settled on was actually 1e-4.
Um, typically you'll actually start with 1e-3 and then you keep decaying down to 1e-4. Um, but in this case, there was really no learning taking place with 1e-3.
So I actually started at 1e-4.
And I never even decayed it.
I just kind of left it there.
We'll see as time goes on.
And if we have a much, much, much larger training set, or just different data or more complex data, we might find that we can get away with a smaller learning rate, a decay schedule, and all that.
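In code, setting that learning rate might look something like the following sketch. Using Adam and binary crossentropy here is an assumption for illustration; the point is just passing an explicit 1e-4 instead of the usual 1e-3 starting value:

```python
from tensorflow.keras.optimizers import Adam

# Start directly at 1e-4 with no decay, since 1e-3 wasn't learning here.
learning_rate = 1e-4
opt = Adam(learning_rate=learning_rate)

model.compile(loss="binary_crossentropy",  # assumed loss, for illustration
              optimizer=opt,
              metrics=["accuracy"])
```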
So sometimes, if you're just watching it and kind of taking mental note of the loss or accuracy or whatever, you can miss things. There are a lot of trends that you're going to miss that are gonna be pretty useful to see visualized in something like TensorBoard.
So that's why that's there.
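A minimal sketch of hooking up TensorBoard so those trends get logged instead of eyeballed. The log directory name, batch size, epoch count, and dummy training arrays are all placeholder assumptions:

```python
import time
import numpy as np
from tensorflow.keras.callbacks import TensorBoard

# Log each run to its own directory so runs show up separately in TensorBoard.
tensorboard = TensorBoard(log_dir=f"logs/cnn-{int(time.time())}")

# Dummy data just to make the sketch runnable; shapes match the placeholder
# input_shape used in the model sketch above.
X_train = np.random.random((100, 64, 64, 1))
y_train = np.random.randint(0, 2, size=(100,))

model.fit(X_train, y_train,
          batch_size=32,
          epochs=10,
          validation_split=0.1,
          callbacks=[tensorboard])
```

Then you'd point TensorBoard at that directory, for example `tensorboard --logdir logs`, to watch the loss and accuracy curves as training runs.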
Okay.
Um, yeah, I think I'm gonna cut it here and pick it up in the next tutorial.