Outsourcing gives companies the freedom to offload non-core but still important areas of their administration to firms that specialize in that area. 1. Outsourcing frees up time and resources, letting you concentrate on your core business activities. 2. Outsourcing saves money on personnel costs. The expenses of an employee-bookkeeper include salary, paid time off, payroll taxes, unemployment taxes, workers' compensation insurance, and benefits. In addition, you must provide workspace, office furniture, office supplies, software, and a computer. Here's when you should consider outsourcing: the business owner is spending five or more hours a week managing the bookkeeping. By outsourcing the bookkeeping functions, you receive services from a professional at a fraction of the cost. 3. Outsourcing the bookkeeping is a far more effective way to organize the finances for the tax man. The Canada Revenue Agency is much more likely to accept the opinion of a reputable bookkeeping service than a company's in-house assessment. A professional bookkeeping firm will organize the records in a way the CRA can understand. In short, bookkeeping professionals speak the CRA's language. Most business owners do not, and don't have the time to learn. Saving money on taxes and saving time on potential audits is one of the biggest ways to save money as a small business owner. A large majority of the small businesses that fail go under from the weight of a tax burden on top of their other expenses. Outsourced bookkeeping is a real cost-cutting benefit for small businesses. GHVA presents: Move your business in the right direction with virtual assistance. You are invited to a networking & information session on how virtual assistants can help unload some of the stress and workload you face every day, profitably. Janet Barclay, Organized Assistant; Laurie Meyer, Success Office Solutions; Salma Burney, Virtual Girl Friday; Jacquie Manore, Workload Solution Services Inc. Keynote speaker: Dr. Dave Howlett, founder and CEO of RealHumanBeing.org, will present a portion of his presentation "How to Connect (like a real human being)". Mr. Howlett's seminars have left thousands of people inspired and determined to do the right thing for themselves, their companies, and their children. He will deliver a 15-minute portion of his famed "How to Connect" presentation. Cost: 20.00 at the door, or pre-register and save 5.00; the price includes parking and catering by Pepperwood. Good bookkeeping records mean having a good filing system. Without one you don't have the other. Keep the bookkeeping up to date. On the sales side, if you don't provide an invoice or receipt, you don't get paid. Purchases should be entered monthly or quarterly to match your GST reporting period. Don't leave it to once a year just because that is your GST reporting period. There are excellent reasons to keep the bookkeeping up to date in my article "Bookkeeping… Why Bother". When you pay a bill, record the date and method of payment.
Record the cheque number if it's paid by cheque, or which credit card it was paid on. If it's a partial payment, record the amount and date of each payment. Now the information is at hand to enter into your books. It's a simple thing, but that information can be useful to have 6 or 12 months down the road. Always get a receipt - cash purchases are hard to support otherwise, and yes, Tim Hortons will give you a receipt if you ask. If receipts are so faded or crumpled that they are illegible - guess what - they don't get entered into the books. Credit card statements are not always sufficient proof. An item bought from Wal-Mart could be anything, and the fact that it was purchased with your business card does not prove a business deduction. Make detailed deposit slips and keep a copy. Last I checked, banks were still giving out free deposit books. Or buy a simple notebook. Keeping a detailed record of each deposit helps us match a customer's payment to the deposit in the bank account. Use a calendar to remind you of due dates if you're tracking any of the following taxes - PST, GST, payroll, WCB, quarterly income tax. Making payments on time will keep you out of the tax-arrears hole with the Canada Revenue Agency. See more on this in my article "How Did I Fall So Deep into Tax Arrears". Smart business people know that time is money, so plan for the future. Organized records will make life easier for your bookkeeper, whether that person is yourself or someone you're paying. Should you have dealings with Canada Revenue, the business person with organized records will have a much easier time than the person who doesn't. Under section 230 of the Income Tax Act, any person carrying on a business in Canada who is required to pay or collect taxes must keep books and records at their place of business or residence in Canada, in a format that enables the assessment and payment of taxes. Most people in business are aware that there is a proper way to keep the books. For those who are not aware, it is important to realize that Revenue Canada has the power to require you to keep proper books. Good bookkeeping records mean having a good filing system. Without one you don't have the other. Set up a filing system that you can follow, and use it. This is probably the first and most important step in keeping good records. Simple filing systems are easy to set up and maintain. Quarterly GST filers: your GST return for April/May/June 2008 is due July 31, 2008. How do I know if I'm a quarterly filer? Get out the form named "Goods and Services Tax/Harmonized Sales Tax (GST/HST) Return for Registrants". The essential information you need is in the three boxes at the top right of page 1. The first box shows the due date of your remittance, the second box shows your business account number, and the third box shows the reporting period. Or you could be an annual filer. The reporting period box will tell you the date range for the remittance period. How much do I pay? Organize your sales receipts to calculate the GST collected on sales. As of January 1, 2008 the GST rate is 5%. Collect and organize your business receipts to calculate the GST paid on purchases. Subtract the GST on purchases from the GST on sales and remit the difference to the Receiver General. (I'm assuming that sales were greater than purchases.) If your GST on purchases is greater than your GST on sales you may have a refund, but it all depends. There are always exceptions to the rule.
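To make the arithmetic concrete, here is a small worked example with made-up numbers (not from any actual return): suppose you collected 2,000.00 of GST on sales this quarter and paid 1,500.00 of GST on business purchases. You would remit 2,000.00 - 1,500.00 = 500.00 to the Receiver General. If instead you had collected 1,500.00 and paid 2,000.00, you might be entitled to a 500.00 refund, subject to the exceptions mentioned above.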
Nowadays there are many ways to make the payment. You can: snail-mail a cheque; visit your local bank; use online banking; file through GST NetFile (cra-arc. gc. camenu-e. html); or GST Telefile (cra-arc. gc. camenu-e. html). Send the payment on time. The Receiver General is very unforgiving of lateness and will apply penalties and interest compounding daily. Click this link to the Canada Revenue Agency site for everything you ever wanted to know about GST: cra-arc. gc. cataxbusinesstopicsgstmenu-e. html. Hands up, whoever has started their 2008 bookkeeping. Excellent. And the rest of you - what are you waiting for? Why wait until April 30 to see the results of this year's work? By starting now you can create a Profit & Loss statement that will show you whether you've made or lost money, and how it was spent. This report is an incredible piece of information that can help you now and later. Do you use the services of a bookkeeper, or do it yourself? We wear many hats while trying to run our businesses, and maybe we're wearing too many. If you struggle with the bookkeeping - and I know it's not a pleasant task - then perhaps you should consider getting some help. Most professional bookkeepers will provide outsourcing of the work, training in the use of the software, or help in figuring out which expense category to use. An excerpt from a home-based business article: Don't overlook management/bookkeeping. Lack of management skills is one of the single highest causes of business failure. Take courses, consult an expert or hire help, but learn basic management skills before you start. (canadabusiness. caservletContentServerpagenameCBSCFEdisplayampcGuideFactSheetampcid1081945277281en) Of course you need some kind of system for recording everything. This could be an accounting program, a spreadsheet, or paper-based. In the comment box, please let me know what kind of system you use for your bookkeeping. I'd really like to know. In an upcoming article I'll publish my findings, along with information on the various systems. The Canadian Bookkeepers Association (CBA) is a national not-for-profit committed to the advancement of professional bookkeepers. Membership in the CBA provides bookkeepers with the resources to succeed in an ever-changing environment. Our association creates excellence through knowledge and is growing rapidly, with a comprehensive financial-management approach to business for companies of every size. Our membership is growing quickly every day and represents bookkeepers in most of Canada's provinces and territories. Our mission includes: to promote, support, provide for and encourage Canadian bookkeepers.
To promote and increase awareness of bookkeeping in Canada as a professional discipline. To support national, regional, and local networking among Canadian bookkeepers. To provide information on cutting-edge procedures, training, and technologies that improve the industry, as well as the Canadian bookkeeping professional. To support and encourage responsible and accurate bookkeeping practices throughout Canada. We are committed to growth for the benefit of our members and of bookkeeping in Canada as a professional discipline. Our goals are advances in distance education, certification of bookkeepers, and regional chapters. We appreciate suggestions that help improve the site and the Association. We are listening, and we value your input. We are working toward a designation for bookkeepers in Canada. The designation will be "Certified Professional Bookkeeper". The Canadian Bookkeepers Association was formerly known as the Canadian Bookkeepers Alliance. The CBA began accepting members in early 2003. On February 9, 2004 the Canadian Bookkeepers Association was incorporated as a not-for-profit association. Membership growth has far exceeded what was originally anticipated. We are thrilled with the Association's growth. We have grown with every milestone into the national not-for-profit we are today, with members in nearly every province and territory.

How the backpropagation algorithm works
In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation. The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks. This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases. And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation isn't just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. That's well worth studying in detail. With that said, if you want to skim the chapter, or jump straight to the next chapter, that's fine. I've written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you don't follow all the reasoning. Before discussing backpropagation, let's warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter, but I described it quickly, so it's worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context. Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use $w^l_{jk}$ to denote the weight for the connection from the $k^{\rm th}$ neuron in the $(l-1)^{\rm th}$ layer to the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. So, for example, the diagram below shows the weight on a connection from the fourth neuron in the second layer to the second neuron in the third layer of a network: This notation is cumbersome at first, and it does take some work to master. But with a little effort you'll find the notation becomes easy and natural.
One quirk of the notation is the ordering of the $j$ and $k$ indices. You might think that it makes more sense to use $j$ to refer to the input neuron, and $k$ to the output neuron, not vice versa, as is actually done. I'll explain the reason for this quirk below. We use a similar notation for the network's biases and activations. Explicitly, we use $b^l_j$ for the bias of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. And we use $a^l_j$ for the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. The following diagram shows examples of these notations in use: With these notations, the activation $a^l_j$ of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer is related to the activations in the $(l-1)^{\rm th}$ layer by the equation (compare Equation (4) and the surrounding discussion in the last chapter)

  $a^l_j = \sigma\left( \sum_k w^l_{jk} a^{l-1}_k + b^l_j \right)$,   (23)

where the sum is over all neurons $k$ in the $(l-1)^{\rm th}$ layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l^{\rm th}$ layer of neurons, that is, the entry in the $j^{\rm th}$ row and $k^{\rm th}$ column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l^{\rm th}$ layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$. The last ingredient we need to rewrite (23) in a matrix form is the idea of vectorizing a function such as $\sigma$. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as $\sigma$ to every element in a vector $v$. We use the obvious notation $\sigma(v)$ to denote this kind of elementwise application of a function. That is, the components of $\sigma(v)$ are just $\sigma(v)_j = \sigma(v_j)$. As an example, if we have the function $f(x) = x^2$ then the vectorized form of $f$ has the effect

  $f\left( \begin{bmatrix} 2 \\ 3 \end{bmatrix} \right) = \begin{bmatrix} f(2) \\ f(3) \end{bmatrix} = \begin{bmatrix} 4 \\ 9 \end{bmatrix}$,   (24)

that is, the vectorized $f$ just squares every element of the vector. With these notations in mind, Equation (23) can be rewritten in the beautiful and compact vectorized form

  $a^l = \sigma(w^l a^{l-1} + b^l)$.   (25)

This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the $\sigma$ function. Incidentally, it's this expression that motivates the quirk in the $w^l_{jk}$ notation mentioned earlier. If we used $j$ to index the input neuron, and $k$ to index the output neuron, then we'd need to replace the weight matrix in Equation (25) by the transpose of the weight matrix. That's a small change, but annoying, and we'd lose the easy simplicity of saying (and thinking) "apply the weight matrix to the activations". This global view is often easier and more succinct (and involves fewer indices!) than the neuron-by-neuron view we've taken to now. Think of it as a way of escaping index hell, while remaining precise about what's going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization.
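To see what Equation (25) looks like in code, here is a minimal Numpy sketch of the feedforward pass. The layer sizes and random initialization are illustrative assumptions of mine, not taken from the book's network.py:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical network: 3 input neurons, one hidden layer of 4, and 2 outputs.
sizes = [3, 4, 2]
rng = np.random.default_rng(0)
# weights[l] has shape (sizes[l+1], sizes[l]), matching the w^l_{jk} convention:
# row j indexes the output neuron, column k the input neuron.
weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]

def feedforward(a):
    """Apply a^l = sigma(w^l a^{l-1} + b^l), Equation (25), layer by layer."""
    for w, b in zip(weights, biases):
        a = sigmoid(np.dot(w, a) + b)
    return a

x = rng.standard_normal((3, 1))  # a single input, as a column vector
print(feedforward(x))            # the network's output activations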
In fact, the code in the last chapter made implicit use of this expression to compute the behaviour of the network. When using Equation (25) to compute $a^l$, we compute the intermediate quantity $z^l \equiv w^l a^{l-1} + b^l$ along the way. This quantity turns out to be useful enough to be worth naming: we call $z^l$ the weighted input to the neurons in layer $l$. We'll make considerable use of the weighted input $z^l$ later in the chapter. Equation (25) is sometimes written in terms of the weighted input, as $a^l = \sigma(z^l)$. It's also worth noting that $z^l$ has components $z^l_j = \sum_k w^l_{jk} a^{l-1}_k + b^l_j$, that is, $z^l_j$ is just the weighted input to the activation function for neuron $j$ in layer $l$. The goal of backpropagation is to compute the partial derivatives $\partial C / \partial w$ and $\partial C / \partial b$ of the cost function $C$ with respect to any weight $w$ or bias $b$ in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, it's useful to have an example cost function in mind. We'll use the quadratic cost function from the last chapter (c.f. Equation (6), $C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x) - a \|^2$). In the notation of the last section, the quadratic cost has the form

  $C = \frac{1}{2n} \sum_x \| y(x) - a^L(x) \|^2$,   (26)

where: $n$ is the total number of training examples; the sum is over individual training examples, $x$; $y = y(x)$ is the corresponding desired output; $L$ denotes the number of layers in the network; and $a^L = a^L(x)$ is the vector of activations output from the network when $x$ is input. Okay, so what assumptions do we need to make about our cost function, $C$, in order that backpropagation can be applied? The first assumption we need is that the cost function can be written as an average $C = \frac{1}{n} \sum_x C_x$ over cost functions $C_x$ for individual training examples, $x$. This is the case for the quadratic cost function, where the cost for a single training example is $C_x = \frac{1}{2} \| y - a^L \|^2$. This assumption will also hold true for all the other cost functions we'll meet in this book. The reason we need this assumption is because what backpropagation actually lets us do is compute the partial derivatives $\partial C_x / \partial w$ and $\partial C_x / \partial b$ for a single training example. We then recover $\partial C / \partial w$ and $\partial C / \partial b$ by averaging over training examples. In fact, with this assumption in mind, we'll suppose the training example $x$ has been fixed, and drop the $x$ subscript, writing the cost $C_x$ as $C$. We'll eventually put the $x$ back in, but for now it's a notational nuisance that is better left implicit.
The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network. For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example $x$ may be written as

  $C = \frac{1}{2} \| y - a^L \|^2 = \frac{1}{2} \sum_j (y_j - a^L_j)^2$,   (27)

and thus is a function of the output activations. Of course, this cost function also depends on the desired output $y$, and you may wonder why we're not regarding the cost also as a function of $y$. Remember, though, that the input training example $x$ is fixed, and so the output $y$ is also a fixed parameter. In particular, it's not something we can modify by changing the weights and biases in any way, i.e., it's not something which the neural network learns. And so it makes sense to regard $C$ as a function of the output activations $a^L$ alone, with $y$ merely a parameter that helps define that function. The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose $s$ and $t$ are two vectors of the same dimension. Then we use $s \odot t$ to denote the elementwise product of the two vectors. Thus the components of $s \odot t$ are just $(s \odot t)_j = s_j t_j$. As an example,

  $\begin{bmatrix} 1 \\ 2 \end{bmatrix} \odot \begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \cdot 3 \\ 2 \cdot 4 \end{bmatrix} = \begin{bmatrix} 3 \\ 8 \end{bmatrix}$.   (28)

This kind of elementwise multiplication is sometimes called the Hadamard product or the Schur product. We'll refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation.
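In Numpy, for instance, the * operator on arrays is already the elementwise (Hadamard) product, so Equation (28) is a one-liner:

import numpy as np

s = np.array([1, 2])
t = np.array([3, 4])
print(s * t)  # [3 8] -- the elementwise product, i.e. the Hadamard product s ⊙ t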
Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. But to compute those, we first introduce an intermediate quantity, $\delta^l_j$, which we call the error in the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. Backpropagation will give us a procedure to compute the error $\delta^l_j$, and then will relate $\delta^l_j$ to $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. To understand how the error is defined, imagine there is a demon in our neural network: The demon sits at the $j^{\rm th}$ neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation. It adds a little change $\Delta z^l_j$ to the neuron's weighted input, so that instead of outputting $\sigma(z^l_j)$, the neuron instead outputs $\sigma(z^l_j + \Delta z^l_j)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z^l_j} \Delta z^l_j$. Now, this demon is a good demon, and is trying to help you improve the cost, i.e., it's trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal. This is only the case for small changes $\Delta z^l_j$, of course. We'll assume that the demon is constrained to make such small changes. And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron. Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by

  $\delta^l_j \equiv \frac{\partial C}{\partial z^l_j}$.   (29)

As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. You might wonder why the demon is changing the weighted input $z^l_j$. Surely it'd be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below. But it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. (In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. E.g., if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.) Plan of attack: Backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually delve deeper into the equations. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations. Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code; and, in the final section of the chapter, we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding, those equations will come to seem comfortable and, perhaps, even beautiful and natural. An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by

  $\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j)$.   (BP1)

This is a very natural expression. The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j^{\rm th}$ output activation. If, for example, $C$ doesn't depend much on a particular output neuron, $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$.
Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function then $C = \frac{1}{2} \sum_j (y_j - a^L_j)^2$, and so $\partial C / \partial a^L_j = (a^L_j - y_j)$, which obviously is easily computable. Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as

  $\delta^L = \nabla_a C \odot \sigma'(z^L)$.   (BP1a)

Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations. It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason from now on we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L - y)$, and so the fully matrix-based form of (BP1) becomes

  $\delta^L = (a^L - y) \odot \sigma'(z^L)$.   (30)

As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy. An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular

  $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$,   (BP2)

where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{\rm th}$ layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)^{\rm th}$ layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{\rm th}$ layer. We then take the Hadamard product $\odot\, \sigma'(z^l)$. This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$. By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.
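As a minimal illustration of that recipe, here is a Numpy sketch of (BP1) and (BP2) for the quadratic cost. It continues the hypothetical feedforward sketch from earlier, reusing its sigmoid, sizes, weights, biases, and input x, so those names are assumptions of that sketch rather than fixed notation:

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z) * (1 - sigmoid(z))

# Forward pass, storing the weighted inputs z^l and activations a^l per layer.
activation, activations, zs = x, [x], []
for w, b in zip(weights, biases):
    z = np.dot(w, activation) + b
    zs.append(z)
    activation = sigmoid(z)
    activations.append(activation)

y = np.zeros((2, 1))  # a made-up target vector for the quadratic cost
# (BP1), in the form of Equation (30): delta^L = (a^L - y) ⊙ sigma'(z^L)
delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
# (BP2): delta^l = ((w^{l+1})^T delta^{l+1}) ⊙ sigma'(z^l), moving backward
deltas = [delta]
for l in range(2, len(sizes)):
    delta = np.dot(weights[-l + 1].transpose(), delta) * sigmoid_prime(zs[-l])
    deltas.insert(0, delta)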
An equation for the rate of change of the cost with respect to any bias in the network: In particular:

  $\frac{\partial C}{\partial b^l_j} = \delta^l_j$.   (BP3)

That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as

  $\frac{\partial C}{\partial b} = \delta$,   (31)

where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$. An equation for the rate of change of the cost with respect to any weight in the network: In particular:

  $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$.   (BP4)

This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as

  $\frac{\partial C}{\partial w} = a_{\rm in} \delta_{\rm out}$,   (32)

where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$, and the two neurons connected by that weight, we can depict this as: A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent. In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly. There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately 0 or 1. When this occurs we will have $\sigma'(z^L_j) \approx 0$. And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of output neurons. We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation. And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.) Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e., is either high- or low-activation. None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns. Furthermore, we can turn this type of reasoning around. The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$). And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function $\sigma$ so that $\sigma'$ is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book we'll see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1)-(BP4) in mind can help explain why such modifications are tried, and what impact they can have.
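A tiny standalone sketch makes the saturation effect concrete, showing how quickly $\sigma'(z)$ collapses once a neuron's weighted input moves far from zero (the sample points are my own choice):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

for z in [0.0, 2.0, 5.0, 10.0]:
    print(f"z = {z:5.1f}   sigma'(z) = {sigmoid_prime(z):.6f}")
# sigma'(0) = 0.25 is the maximum; by z = 5 the slope is about 0.0066, and by
# z = 10 about 4.5e-05: a saturated neuron scales its error delta almost to zero.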
Alternate presentation of the equations of backpropagation: I've stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you're unused to the Hadamard product. There's an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as

  $\delta^L = \Sigma'(z^L) \nabla_a C$,   (33)

where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z^L_j)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication. (2) Show that (BP2) may be rewritten as

  $\delta^l = \Sigma'(z^l) (w^{l+1})^T \delta^{l+1}$.   (34)

(3) By combining observations (1) and (2) show that

  $\delta^l = \Sigma'(z^l) (w^{l+1})^T \ldots \Sigma'(z^{L-1}) (w^L)^T \Sigma'(z^L) \nabla_a C$.   (35)

For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2). The reason I've focused on (BP1) and (BP2) is because that approach turns out to be faster to implement numerically.
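A quick numerical sanity check of observation (1), using an arbitrary three-neuron output layer of my own invention: multiplying by the diagonal matrix $\Sigma'(z^L)$ gives the same vector as the Hadamard form of (BP1):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

rng = np.random.default_rng(1)
zL = rng.standard_normal(3)       # made-up weighted inputs for the output layer
grad_a = rng.standard_normal(3)   # made-up components of nabla_a C

hadamard_form = grad_a * sigmoid_prime(zL)         # (BP1a): nabla_a C ⊙ sigma'(z^L)
matrix_form = np.diag(sigmoid_prime(zL)) @ grad_a  # (33): Sigma'(z^L) nabla_a C
print(np.allclose(hadamard_form, matrix_form))     # True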
We'll now prove the four fundamental equations (BP1)-(BP4). All four are consequences of the chain rule from multivariable calculus. If you're comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on. Let's begin with Equation (BP1), which gives an expression for the output error, $\delta^L$. To prove this equation, recall that by definition

  $\delta^L_j = \frac{\partial C}{\partial z^L_j}$.   (36)

Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations,

  $\delta^L_j = \sum_k \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_j}$,   (37)

where the sum is over all neurons $k$ in the output layer. Of course, the output activation $a^L_k$ of the $k^{\rm th}$ neuron depends only on the weighted input $z^L_j$ for the $j^{\rm th}$ neuron when $k = j$. And so $\partial a^L_k / \partial z^L_j$ vanishes when $k \neq j$. As a result we can simplify the previous equation to

  $\delta^L_j = \frac{\partial C}{\partial a^L_j} \frac{\partial a^L_j}{\partial z^L_j}$.   (38)

Recalling that $a^L_j = \sigma(z^L_j)$, the second term on the right can be written as $\sigma'(z^L_j)$, and the equation becomes

  $\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j)$,   (39)

which is just (BP1) in component form. Next, we'll prove (BP2), which gives an equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$. To do this, we want to rewrite $\delta^l_j = \partial C / \partial z^l_j$ in terms of $\delta^{l+1}_k = \partial C / \partial z^{l+1}_k$. We can do this using the chain rule,

  $\delta^l_j = \frac{\partial C}{\partial z^l_j} = \sum_k \frac{\partial C}{\partial z^{l+1}_k} \frac{\partial z^{l+1}_k}{\partial z^l_j} = \sum_k \frac{\partial z^{l+1}_k}{\partial z^l_j} \delta^{l+1}_k$,   (40-42)

where in the last step we have interchanged the two terms on the right-hand side, and substituted the definition of $\delta^{l+1}_k$. To evaluate the first term in the last sum, note that

  $z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j + b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) + b^{l+1}_k$.   (43)

Differentiating, we obtain

  $\frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j)$.   (44)

Substituting back into (42) we obtain

  $\delta^l_j = \sum_k w^{l+1}_{kj} \delta^{l+1}_k \sigma'(z^l_j)$.   (45)

This is just (BP2) written in component form. The final two equations we want to prove are (BP3) and (BP4). These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise. That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But it's really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multi-variable calculus. That's all there really is to backpropagation - the rest is details. The backpropagation equations provide us with a way of computing the gradient of the cost function. Let's explicitly write this out in the form of an algorithm: 1. Input $x$: Set the corresponding activation $a^1$ for the input layer. 2. Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^l = w^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$. 3. Output error $\delta^L$: Compute the vector $\delta^L = \nabla_a C \odot \sigma'(z^L)$. 4. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$. 5. Output: The gradient of the cost function is given by $\partial C / \partial w^l_{jk} = a^{l-1}_k \delta^l_j$ and $\partial C / \partial b^l_j = \delta^l_j$. Examining the algorithm you can see why it's called back propagation. We compute the error vectors $\delta^l$ backward, starting from the final layer. It may seem peculiar that we're going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions. Exercises: Backpropagation with a single modified neuron. Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by $f(\sum_j w_j x_j + b)$, where $f$ is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case? Backpropagation with linear neurons. Suppose we replace the usual non-linear $\sigma$ function with $\sigma(z) = z$ throughout the network. Rewrite the backpropagation algorithm for this case. As I've described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, $C = C_x$. In practice, it's common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples.
In particular, given a mini-batch of $m$ training examples, the following algorithm applies a gradient descent learning step based on that mini-batch: 1. Input a set of training examples. 2. For each training example $x$: Set the corresponding input activation $a^{x,1}$, and perform the following steps: Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^{x,l} = w^l a^{x,l-1} + b^l$ and $a^{x,l} = \sigma(z^{x,l})$. Output error $\delta^{x,L}$: Compute the vector $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^{x,l} = ((w^{l+1})^T \delta^{x,l+1}) \odot \sigma'(z^{x,l})$. 3. Gradient descent: For each $l = L, L-1, \ldots, 2$ update the weights according to the rule $w^l \rightarrow w^l - \frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$, and the biases according to the rule $b^l \rightarrow b^l - \frac{\eta}{m} \sum_x \delta^{x,l}$. Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. I've omitted those for simplicity. Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples. Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to figure out the partial derivatives $\partial C_x / \partial b^l_j$ and $\partial C_x / \partial w^l_{jk}$. The backprop method follows the algorithm in the last section closely. There is one small change - we use a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so, e.g., l[-3] is the third-last entry in a list l. The code for backprop is below, together with a few helper functions, which are used to compute the $\sigma$ function, the derivative $\sigma'$, and the derivative of the cost function.
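The original listing did not survive the page extraction, so what follows is a sketch of the backprop method and its helpers written to match the algorithm above and the surrounding description; treat the details as a reconstruction rather than a verbatim copy of network.py:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    """Derivative of the sigmoid function."""
    return sigmoid(z) * (1 - sigmoid(z))

class Network:
    # ... __init__, feedforward, update_mini_batch etc. as in the last chapter ...

    def cost_derivative(self, output_activations, y):
        """Partial derivatives dC_x/da for the quadratic cost."""
        return output_activations - y

    def backprop(self, x, y):
        """Return (nabla_b, nabla_w), the gradient of the cost C_x, as
        layer-by-layer lists of arrays shaped like self.biases and self.weights."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # feedforward, storing all activations and weighted inputs z, layer by layer
        activation = x
        activations = [x]
        zs = []
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # backward pass: (BP1), then (BP3) and (BP4) for the output layer
        delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta
        nabla_w[-1] = np.dot(delta, activations[-2].transpose())
        # (BP2), using negative indices to walk backward through the layers
        for l in range(2, self.num_layers):
            z = zs[-l]
            delta = np.dot(self.weights[-l + 1].transpose(), delta) * sigmoid_prime(z)
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l - 1].transpose())
        return (nabla_b, nabla_w)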
With these inclusions you should be able to understand the code in a self-contained way. If something's tripping you up, you may find it helpful to consult the original description (and complete listing) of the code. Problem: Fully matrix-based approach to backpropagation over a mini-batch. Our implementation of stochastic gradient descent loops over training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector, $x$, we can begin with a matrix $X = [x_1\, x_2 \ldots x_m]$ whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch. (On my laptop, for example, the speedup is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant. In what sense is backpropagation a fast algorithm? To answer this question, let's consider another approach to computing the gradient. Imagine it's the early days of neural networks research. Maybe it's the 1950s or 1960s, and you're the first person in the world to think of using gradient descent to learn! But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach. You decide to regard the cost as a function of the weights $C = C(w)$ alone (we'll get back to the biases in a moment). You number the weights $w_1, w_2, \ldots$, and want to compute $\partial C / \partial w_j$ for some particular weight $w_j$. An obvious way of doing that is to use the approximation

  $\frac{\partial C}{\partial w_j} \approx \frac{C(w + \epsilon e_j) - C(w)}{\epsilon}$,   (46)

where $\epsilon > 0$ is a small positive number, and $e_j$ is the unit vector in the $j^{\rm th}$ direction. In other words, we can estimate $\partial C / \partial w_j$ by computing the cost $C$ for two slightly different values of $w_j$, and then applying Equation (46). The same idea will let us compute the partial derivatives $\partial C / \partial b$ with respect to the biases. This approach looks very promising. It's simple conceptually, and extremely easy to implement, using just a few lines of code. Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient! Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight $w_j$ we need to compute $C(w + \epsilon e_j)$ in order to compute $\partial C / \partial w_j$. That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute $C(w)$ as well, so that's a total of a million and one passes through the network. What's clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives $\partial C / \partial w_j$ using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass. (This should be plausible, but it requires some analysis to make a careful statement. It's plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass it's multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost.) And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46). And so even though backpropagation appears superficially more complex than the approach based on (46), it's actually much, much faster. This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks.
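Here is a minimal sketch of the finite-difference estimate (46), applied to a toy cost function of my own choosing (the function and numbers are illustrative assumptions, not from the book):

import numpy as np

def cost(w):
    """A toy cost function standing in for C(w); any smooth function works."""
    return 0.5 * np.sum(w ** 2)

def numerical_gradient(w, epsilon=1e-6):
    """Estimate dC/dw_j via Equation (46): one extra cost evaluation per weight."""
    grad = np.zeros_like(w)
    base = cost(w)
    for j in range(len(w)):
        w_step = w.copy()
        w_step[j] += epsilon
        grad[j] = (cost(w_step) - base) / epsilon
    return grad

w = np.array([1.0, -2.0, 3.0])
print(numerical_gradient(w))  # close to the exact gradient, which is w itself
# Note the loop: a network with a million weights needs a million cost
# evaluations here, which is why backpropagation's single backward pass wins.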
Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i.e., networks with many hidden layers. Later in the book we'll see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks. As I've explained it, backpropagation presents two mysteries. First, what's the algorithm really doing? We've developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications? The second mystery is how someone could ever have discovered backpropagation in the first place. It's one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesn't mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm? In this section I'll address both these mysteries. To improve our intuition about what the algorithm is doing, let's imagine that we've made a small change $\Delta w^l_{jk}$ to some weight in the network, $w^l_{jk}$. That change in weight will cause a change in the output activation from the corresponding neuron. That, in turn, will cause a change in all the activations in the next layer. Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function. The change $\Delta C$ in the cost is related to the change $\Delta w^l_{jk}$ in the weight by the equation

  $\Delta C \approx \frac{\partial C}{\partial w^l_{jk}} \Delta w^l_{jk}$.   (47)

This suggests that a possible approach to computing $\frac{\partial C}{\partial w^l_{jk}}$ is to carefully track how a small change in $w^l_{jk}$ propagates to cause a small change in $C$. If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute $\partial C / \partial w^l_{jk}$. Let's try to carry this out. The change $\Delta w^l_{jk}$ causes a small change $\Delta a^l_j$ in the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. This change is given by

  $\Delta a^l_j \approx \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}$.   (48)

The change in activation $\Delta a^l_j$ will cause changes in all the activations in the next layer, i.e., the $(l+1)^{\rm th}$ layer. We'll concentrate on the way just a single one of those activations is affected, say $a^{l+1}_q$. In fact, it'll cause the following change:

  $\Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \Delta a^l_j$.   (49)

Substituting in the expression from Equation (48), we get:

  $\Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}$.   (50)

Of course, the change $\Delta a^{l+1}_q$ will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from $w^l_{jk}$ to $C$, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output. If the path goes through activations $a^l_j, a^{l+1}_q, \ldots, a^{L-1}_n, a^L_m$ then the resulting expression is

  $\Delta C \approx \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}$,   (51)

that is, we've picked up a $\partial a / \partial a$ type term for each additional neuron we've passed through, as well as the $\partial C / \partial a^L_m$ term at the end. This represents the change in $C$ due to changes in the activations along this particular path through the network.
Of course, there are many paths by which a change in $w^l_{jk}$ can propagate to affect the cost, and we've been considering just a single path. To compute the total change in $C$ it is plausible that we should sum over all the possible paths between the weight and the final cost, i.e.,

  $\Delta C \approx \sum_{mnp \ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}$,   (52)

where we've summed over all possible choices for the intermediate neurons along the path. Comparing with (47) we see that

  $\frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp \ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}}$.   (53)

Now, Equation (53) looks complicated. However, it has a nice intuitive interpretation. We're computing the rate of change of $C$ with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor, which is just the partial derivative of one neuron's activation with respect to the other neuron's activation. The edge from the first weight to the first neuron has a rate factor $\partial a^l_j / \partial w^l_{jk}$. The rate factor for a path is just the product of the rate factors along the path. And the total rate of change $\partial C / \partial w^l_{jk}$ is just the sum of the rate factors of all paths from the initial weight to the final cost. This procedure is illustrated here, for a single path: What I've been providing up to now is a heuristic argument, a way of thinking about what's going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53). That's easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications. This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm! And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factor for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost. Now, I'm not going to work through all this here. It's messy and requires considerable care to work through all the details. If you're up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing. What about the other mystery - how backpropagation could have been discovered in the first place? In fact, if you follow the approach I just sketched you will discover a proof of backpropagation. Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered? What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you.
In the last chapter we saw how neural networks can learn their weights and biases using the gradient descent algorithm. There was, however, a gap in our explanation: we didn't discuss how to compute the gradient of the cost function. That's quite a gap! In this chapter I'll explain a fast algorithm for computing such gradients, an algorithm known as backpropagation.
The backpropagation algorithm was originally introduced in the 1970s, but its importance wasn't fully appreciated until a famous 1986 paper by David Rumelhart, Geoffrey Hinton, and Ronald Williams. That paper describes several neural networks where backpropagation works far faster than earlier approaches to learning, making it possible to use neural nets to solve problems which had previously been insoluble. Today, the backpropagation algorithm is the workhorse of learning in neural networks.

This chapter is more mathematically involved than the rest of the book. If you're not crazy about mathematics you may be tempted to skip the chapter, and to treat backpropagation as a black box whose details you're willing to ignore. Why take the time to study those details? The reason, of course, is understanding. At the heart of backpropagation is an expression for the partial derivative $\partial C / \partial w$ of the cost function $C$ with respect to any weight $w$ (or bias $b$) in the network. The expression tells us how quickly the cost changes when we change the weights and biases. And while the expression is somewhat complex, it also has a beauty to it, with each element having a natural, intuitive interpretation. And so backpropagation isn't just a fast algorithm for learning. It actually gives us detailed insights into how changing the weights and biases changes the overall behaviour of the network. That's well worth studying in detail.

With that said, if you want to skim the chapter, or jump straight to the next chapter, that's fine. I've written the rest of the book to be accessible even if you treat backpropagation as a black box. There are, of course, points later in the book where I refer back to results from this chapter. But at those points you should still be able to understand the main conclusions, even if you don't follow all the reasoning.

Before discussing backpropagation, let's warm up with a fast matrix-based algorithm to compute the output from a neural network. We actually already briefly saw this algorithm near the end of the last chapter, but I described it quickly, so it's worth revisiting in detail. In particular, this is a good way of getting comfortable with the notation used in backpropagation, in a familiar context.

Let's begin with a notation which lets us refer to weights in the network in an unambiguous way. We'll use $w^l_{jk}$ to denote the weight for the connection from the $k^{\rm th}$ neuron in the $(l-1)^{\rm th}$ layer to the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. So, for example, $w^3_{24}$ denotes the weight on a connection from the fourth neuron in the second layer to the second neuron in the third layer of a network. This notation is cumbersome at first, and it does take some work to master. But with a little effort you'll find the notation becomes easy and natural. One quirk of the notation is the ordering of the $j$ and $k$ indices. You might think that it makes more sense to use $j$ to refer to the input neuron, and $k$ to the output neuron, not vice versa, as is actually done. I'll explain the reason for this quirk below.

We use a similar notation for the network's biases and activations. Explicitly, we use $b^l_j$ for the bias of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. And we use $a^l_j$ for the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer.
With these notations, the activation $a^l_j$ of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer is related to the activations in the $(l-1)^{\rm th}$ layer by the equation (compare Equation (4) and surrounding discussion in the last chapter)

$$ a^l_j = \sigma\left( \sum_k w^l_{jk} a^{l-1}_k + b^l_j \right), \tag{23} $$

where the sum is over all neurons $k$ in the $(l-1)^{\rm th}$ layer. To rewrite this expression in a matrix form we define a weight matrix $w^l$ for each layer, $l$. The entries of the weight matrix $w^l$ are just the weights connecting to the $l^{\rm th}$ layer of neurons, that is, the entry in the $j^{\rm th}$ row and $k^{\rm th}$ column is $w^l_{jk}$. Similarly, for each layer $l$ we define a bias vector, $b^l$. You can probably guess how this works - the components of the bias vector are just the values $b^l_j$, one component for each neuron in the $l^{\rm th}$ layer. And finally, we define an activation vector $a^l$ whose components are the activations $a^l_j$.

The last ingredient we need to rewrite (23) in a matrix form is the idea of vectorizing a function such as $\sigma$. We met vectorization briefly in the last chapter, but to recap, the idea is that we want to apply a function such as $\sigma$ to every element in a vector $v$. We use the obvious notation $\sigma(v)$ to denote this kind of elementwise application of a function. That is, the components of $\sigma(v)$ are just $\sigma(v)_j = \sigma(v_j)$. As an example, if we have the function $f(x) = x^2$ then the vectorized form of $f$ has the effect

$$ f\left( \begin{bmatrix} 2 \\ 3 \end{bmatrix} \right) = \begin{bmatrix} f(2) \\ f(3) \end{bmatrix} = \begin{bmatrix} 4 \\ 9 \end{bmatrix}, \tag{24} $$

that is, the vectorized $f$ just squares every element of the vector.

With these notations in mind, Equation (23) can be rewritten in the beautiful and compact vectorized form

$$ a^l = \sigma(w^l a^{l-1} + b^l). \tag{25} $$

This expression gives us a much more global way of thinking about how the activations in one layer relate to activations in the previous layer: we just apply the weight matrix to the activations, then add the bias vector, and finally apply the $\sigma$ function. By the way, it's this expression that motivates the quirk in the $w^l_{jk}$ notation mentioned earlier. If we used $j$ to index the input neuron, and $k$ to index the output neuron, then we'd need to replace the weight matrix in Equation (25) by the transpose of the weight matrix. That's a small change, but annoying, and we'd lose the easy simplicity of saying (and thinking) "apply the weight matrix to the activations". That global view is often easier and more succinct (and involves fewer indices!) than the neuron-by-neuron view we've taken to now. Think of it as a way of escaping index hell, while remaining precise about what's going on. The expression is also useful in practice, because most matrix libraries provide fast ways of implementing matrix multiplication, vector addition, and vectorization. Indeed, the code in the last chapter made implicit use of this expression to compute the behaviour of the network.

When using Equation (25) to compute $a^l$, we compute the intermediate quantity $z^l \equiv w^l a^{l-1} + b^l$ along the way. This quantity turns out to be useful enough to be worth naming: we call $z^l$ the weighted input to the neurons in layer $l$. We'll make considerable use of the weighted input $z^l$ later in the chapter. Equation (25) is sometimes written in terms of the weighted input, as $a^l = \sigma(z^l)$.
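To make the vectorized form concrete, here is a minimal NumPy sketch of a feedforward pass based on Equation (25). The layer sizes, random initialization, and the sigmoid helper are illustrative assumptions, not code from the book's listing:

    import numpy as np

    def sigmoid(z):
        """Elementwise logistic function, applied to a whole vector at once."""
        return 1.0 / (1.0 + np.exp(-z))

    def feedforward(weights, biases, a):
        """Compute the network's output via a^l = sigma(w^l a^{l-1} + b^l).

        `weights` is a list of weight matrices w^l and `biases` a list of
        bias vectors b^l, one per layer after the input layer."""
        for w, b in zip(weights, biases):
            z = np.dot(w, a) + b   # the weighted input z^l
            a = sigmoid(z)         # the activation a^l
        return a

    # A tiny illustrative network: 3 inputs -> 4 hidden -> 2 outputs.
    rng = np.random.default_rng(0)
    sizes = [3, 4, 2]
    weights = [rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])]
    biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]
    x = rng.standard_normal((3, 1))
    print(feedforward(weights, biases, x))

Note how the weight matrix for each layer has one row per output neuron and one column per input neuron, which is exactly the $j$-before-$k$ index ordering discussed above.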
It's also worth noting that $z^l$ has components $z^l_j = \sum_k w^l_{jk} a^{l-1}_k + b^l_j$, that is, $z^l_j$ is just the weighted input to the activation function for neuron $j$ in layer $l$.

The goal of backpropagation is to compute the partial derivatives $\partial C / \partial w$ and $\partial C / \partial b$ of the cost function $C$ with respect to any weight $w$ or bias $b$ in the network. For backpropagation to work we need to make two main assumptions about the form of the cost function. Before stating those assumptions, though, it's useful to have an example cost function in mind. We'll use the quadratic cost function from the last chapter (c.f. Equation (6)). In the notation of the last section, the quadratic cost has the form

$$ C = \frac{1}{2n} \sum_x \| y(x) - a^L(x) \|^2, \tag{26} $$

where: $n$ is the total number of training examples; the sum is over individual training examples, $x$; $y = y(x)$ is the corresponding desired output; $L$ denotes the number of layers in the network; and $a^L = a^L(x)$ is the vector of activations output from the network when $x$ is input.

Okay, so what assumptions do we need to make about our cost function, $C$, in order that backpropagation can be applied? The first assumption we need is that the cost function can be written as an average $C = \frac{1}{n} \sum_x C_x$ over cost functions $C_x$ for individual training examples, $x$. This is the case for the quadratic cost function, where the cost for a single training example is $C_x = \frac{1}{2} \| y - a^L \|^2$. This assumption will also hold true for all the other cost functions we'll meet in this book.

The reason we need this assumption is because what backpropagation actually lets us do is compute the partial derivatives $\partial C_x / \partial w$ and $\partial C_x / \partial b$ for a single training example. We then recover $\partial C / \partial w$ and $\partial C / \partial b$ by averaging over training examples. In fact, with this assumption in mind, we'll suppose the training example $x$ has been fixed, and drop the $x$ subscript, writing the cost $C_x$ as $C$. We'll eventually put the $x$ back in, but for now it's a notational nuisance that is better left implicit.

The second assumption we make about the cost is that it can be written as a function of the outputs from the neural network. For example, the quadratic cost function satisfies this requirement, since the quadratic cost for a single training example $x$ may be written as

$$ C = \frac{1}{2} \| y - a^L \|^2 = \frac{1}{2} \sum_j (y_j - a^L_j)^2, \tag{27} $$

and thus is a function of the output activations. Of course, this cost function also depends on the desired output $y$, and you may wonder why we're not regarding the cost also as a function of $y$. Remember, though, that the input training example $x$ is fixed, and so the output $y$ is also a fixed parameter. In particular, it's not something we can modify by changing the weights and biases in any way, i.e. it's not something which the neural network learns. And so it makes sense to regard $C$ as a function of the output activations $a^L$ alone, with $y$ merely a parameter that helps define that function.
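As a small illustration of the first assumption, here is a hedged sketch with made-up output activations and targets: the total quadratic cost is just the average of the per-example costs $C_x$:

    import numpy as np

    def quadratic_cost(a_L, y):
        """Per-example quadratic cost C_x = (1/2) ||y - a^L||^2."""
        return 0.5 * np.sum((y - a_L) ** 2)

    # Three made-up training examples: output activations and desired outputs.
    outputs = [np.array([0.8, 0.1]), np.array([0.2, 0.9]), np.array([0.5, 0.5])]
    targets = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 0.0])]

    per_example = [quadratic_cost(a, y) for a, y in zip(outputs, targets)]
    print(per_example)                          # the individual C_x values
    print(sum(per_example) / len(per_example))  # the total cost C, as an average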
The backpropagation algorithm is based on common linear algebraic operations - things like vector addition, multiplying a vector by a matrix, and so on. But one of the operations is a little less commonly used. In particular, suppose $s$ and $t$ are two vectors of the same dimension. Then we use $s \odot t$ to denote the elementwise product of the two vectors. Thus the components of $s \odot t$ are just $(s \odot t)_j = s_j t_j$. As an example,

$$ \begin{bmatrix} 1 \\ 2 \end{bmatrix} \odot \begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 1 \cdot 3 \\ 2 \cdot 4 \end{bmatrix} = \begin{bmatrix} 3 \\ 8 \end{bmatrix}. \tag{28} $$

This kind of elementwise multiplication is sometimes called the Hadamard product or Schur product. We'll refer to it as the Hadamard product. Good matrix libraries usually provide fast implementations of the Hadamard product, and that comes in handy when implementing backpropagation.

Backpropagation is about understanding how changing the weights and biases in a network changes the cost function. Ultimately, this means computing the partial derivatives $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$. But to compute those, we first introduce an intermediate quantity, $\delta^l_j$, which we call the error in the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. Backpropagation will give us a procedure to compute the error $\delta^l_j$, and then will relate $\delta^l_j$ to $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

To understand how the error is defined, imagine there is a demon in our neural network. The demon sits at the $j^{\rm th}$ neuron in layer $l$. As the input to the neuron comes in, the demon messes with the neuron's operation. It adds a little change $\Delta z^l_j$ to the neuron's weighted input, so that instead of outputting $\sigma(z^l_j)$, the neuron instead outputs $\sigma(z^l_j + \Delta z^l_j)$. This change propagates through later layers in the network, finally causing the overall cost to change by an amount $\frac{\partial C}{\partial z^l_j} \Delta z^l_j$.

Now, this demon is a good demon, and is trying to help you improve the cost, i.e. they're trying to find a $\Delta z^l_j$ which makes the cost smaller. Suppose $\frac{\partial C}{\partial z^l_j}$ has a large value (either positive or negative). Then the demon can lower the cost quite a bit by choosing $\Delta z^l_j$ to have the opposite sign to $\frac{\partial C}{\partial z^l_j}$. By contrast, if $\frac{\partial C}{\partial z^l_j}$ is close to zero, then the demon can't improve the cost much at all by perturbing the weighted input $z^l_j$. So far as the demon can tell, the neuron is already pretty near optimal. (This is only the case for small changes $\Delta z^l_j$, of course. We'll assume that the demon is constrained to make such small changes.) And so there's a heuristic sense in which $\frac{\partial C}{\partial z^l_j}$ is a measure of the error in the neuron.

Motivated by this story, we define the error $\delta^l_j$ of neuron $j$ in layer $l$ by

$$ \delta^l_j \equiv \frac{\partial C}{\partial z^l_j}. \tag{29} $$

As per our usual conventions, we use $\delta^l$ to denote the vector of errors associated with layer $l$. Backpropagation will give us a way of computing $\delta^l$ for every layer, and then relating those errors to the quantities of real interest, $\partial C / \partial w^l_{jk}$ and $\partial C / \partial b^l_j$.

You might wonder why the demon is changing the weighted input $z^l_j$. Surely it'd be more natural to imagine the demon changing the output activation $a^l_j$, with the result that we'd be using $\frac{\partial C}{\partial a^l_j}$ as our measure of error. In fact, if you do this things work out quite similarly to the discussion below. But it turns out to make the presentation of backpropagation a little more algebraically complicated. So we'll stick with $\delta^l_j = \frac{\partial C}{\partial z^l_j}$ as our measure of error. (In classification problems like MNIST the term "error" is sometimes used to mean the classification failure rate. For example, if the neural net correctly classifies 96.0 percent of the digits, then the error is 4.0 percent. Obviously, this has quite a different meaning from our $\delta$ vectors. In practice, you shouldn't have trouble telling which meaning is intended in any given usage.)
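Before moving on, here is a tiny numerical sketch of the demon story, under entirely made-up values: we nudge the weighted input of one hidden neuron and estimate its error $\delta_j = \partial C / \partial z_j$ by a finite difference. The two-neuron hidden layer, output weights, and cost are illustrative assumptions:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # A made-up slice of a network: a two-neuron hidden layer feeding one output.
    w_out = np.array([0.9, -0.5])   # weights from the hidden layer to the output
    y = 1.0                         # desired output

    def cost_from_hidden_z(z_hidden):
        a_hidden = sigmoid(z_hidden)
        a_out = sigmoid(np.dot(w_out, a_hidden))
        return 0.5 * (a_out - y) ** 2

    z = np.array([0.2, -0.4])       # the hidden layer's weighted inputs
    dz = 1e-6

    # Numerical estimate of delta_0 = dC/dz_0 for the first hidden neuron.
    z_nudged = z.copy()
    z_nudged[0] += dz
    delta_0 = (cost_from_hidden_z(z_nudged) - cost_from_hidden_z(z)) / dz
    print(delta_0)   # the "error" the demon senses at that neuron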
Plan of attack: Backpropagation is based around four fundamental equations. Together, those equations give us a way of computing both the error $\delta^l$ and the gradient of the cost function. I state the four equations below. Be warned, though: you shouldn't expect to instantaneously assimilate the equations. Such an expectation will lead to disappointment. In fact, the backpropagation equations are so rich that understanding them well requires considerable time and patience as you gradually delve deeper into the equations. The good news is that such patience is repaid many times over. And so the discussion in this section is merely a beginning, helping you on the way to a thorough understanding of the equations.

Here's a preview of the ways we'll delve more deeply into the equations later in the chapter: I'll give a short proof of the equations, which helps explain why they are true; we'll restate the equations in algorithmic form as pseudocode, and see how the pseudocode can be implemented as real, running Python code; and, in the final section of the chapter, we'll develop an intuitive picture of what the backpropagation equations mean, and how someone might discover them from scratch. Along the way we'll return repeatedly to the four fundamental equations, and as you deepen your understanding those equations will come to seem comfortable and, perhaps, even beautiful and natural.

An equation for the error in the output layer, $\delta^L$: The components of $\delta^L$ are given by

$$ \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j). \tag{BP1} $$

This is a very natural expression. The first term on the right, $\partial C / \partial a^L_j$, just measures how fast the cost is changing as a function of the $j^{\rm th}$ output activation. If, for example, $C$ doesn't depend much on a particular output neuron, $j$, then $\delta^L_j$ will be small, which is what we'd expect. The second term on the right, $\sigma'(z^L_j)$, measures how fast the activation function $\sigma$ is changing at $z^L_j$.

Notice that everything in (BP1) is easily computed. In particular, we compute $z^L_j$ while computing the behaviour of the network, and it's only a small additional overhead to compute $\sigma'(z^L_j)$. The exact form of $\partial C / \partial a^L_j$ will, of course, depend on the form of the cost function. However, provided the cost function is known there should be little trouble computing $\partial C / \partial a^L_j$. For example, if we're using the quadratic cost function then $C = \frac{1}{2} \sum_j (y_j - a^L_j)^2$, and so $\partial C / \partial a^L_j = (a^L_j - y_j)$, which obviously is easily computable.

Equation (BP1) is a componentwise expression for $\delta^L$. It's a perfectly good expression, but not the matrix-based form we want for backpropagation. However, it's easy to rewrite the equation in a matrix-based form, as

$$ \delta^L = \nabla_a C \odot \sigma'(z^L). \tag{BP1a} $$

Here, $\nabla_a C$ is defined to be a vector whose components are the partial derivatives $\partial C / \partial a^L_j$. You can think of $\nabla_a C$ as expressing the rate of change of $C$ with respect to the output activations. It's easy to see that Equations (BP1a) and (BP1) are equivalent, and for that reason from now on we'll use (BP1) interchangeably to refer to both equations. As an example, in the case of the quadratic cost we have $\nabla_a C = (a^L - y)$, and so the fully matrix-based form of (BP1) becomes

$$ \delta^L = (a^L - y) \odot \sigma'(z^L). \tag{30} $$

As you can see, everything in this expression has a nice vector form, and is easily computed using a library such as Numpy.
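For instance, here is a minimal NumPy sketch of Equation (30); the weighted inputs and the target are made-up values, and `sigmoid_prime` is an assumed helper for $\sigma'$. Note that NumPy's `*` operator on arrays is exactly the Hadamard product:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        """Derivative of the sigmoid: sigma'(z) = sigma(z) * (1 - sigma(z))."""
        return sigmoid(z) * (1 - sigmoid(z))

    # Made-up weighted inputs and desired output for a 3-neuron output layer.
    z_L = np.array([0.5, -1.2, 2.0])
    a_L = sigmoid(z_L)
    y = np.array([1.0, 0.0, 0.0])

    # (30): delta^L = (a^L - y) Hadamard sigma'(z^L), for the quadratic cost.
    delta_L = (a_L - y) * sigmoid_prime(z_L)
    print(delta_L)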
An equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$: In particular

$$ \delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l), \tag{BP2} $$

where $(w^{l+1})^T$ is the transpose of the weight matrix $w^{l+1}$ for the $(l+1)^{\rm th}$ layer. This equation appears complicated, but each element has a nice interpretation. Suppose we know the error $\delta^{l+1}$ at the $(l+1)^{\rm th}$ layer. When we apply the transpose weight matrix, $(w^{l+1})^T$, we can think intuitively of this as moving the error backward through the network, giving us some sort of measure of the error at the output of the $l^{\rm th}$ layer. We then take the Hadamard product $\odot \, \sigma'(z^l)$. This moves the error backward through the activation function in layer $l$, giving us the error $\delta^l$ in the weighted input to layer $l$.

By combining (BP2) with (BP1) we can compute the error $\delta^l$ for any layer in the network. We start by using (BP1) to compute $\delta^L$, then apply Equation (BP2) to compute $\delta^{L-1}$, then Equation (BP2) again to compute $\delta^{L-2}$, and so on, all the way back through the network.

An equation for the rate of change of the cost with respect to any bias in the network: In particular:

$$ \frac{\partial C}{\partial b^l_j} = \delta^l_j. \tag{BP3} $$

That is, the error $\delta^l_j$ is exactly equal to the rate of change $\partial C / \partial b^l_j$. This is great news, since (BP1) and (BP2) have already told us how to compute $\delta^l_j$. We can rewrite (BP3) in shorthand as

$$ \frac{\partial C}{\partial b} = \delta, \tag{31} $$

where it is understood that $\delta$ is being evaluated at the same neuron as the bias $b$.

An equation for the rate of change of the cost with respect to any weight in the network: In particular:

$$ \frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j. \tag{BP4} $$

This tells us how to compute the partial derivatives $\partial C / \partial w^l_{jk}$ in terms of the quantities $\delta^l$ and $a^{l-1}$, which we already know how to compute. The equation can be rewritten in a less index-heavy notation as

$$ \frac{\partial C}{\partial w} = a_{\rm in} \delta_{\rm out}, \tag{32} $$

where it's understood that $a_{\rm in}$ is the activation of the neuron input to the weight $w$, and $\delta_{\rm out}$ is the error of the neuron output from the weight $w$. Zooming in to look at just the weight $w$ and the two neurons connected by that weight, we can depict this as a single edge, with the activation $a_{\rm in}$ flowing into the weight and the error $\delta_{\rm out}$ flowing out of it.

A nice consequence of Equation (32) is that when the activation $a_{\rm in}$ is small, $a_{\rm in} \approx 0$, the gradient term $\partial C / \partial w$ will also tend to be small. In this case, we'll say the weight learns slowly, meaning that it's not changing much during gradient descent. In other words, one consequence of (BP4) is that weights output from low-activation neurons learn slowly.
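Taken together, (BP1) and (BP2) give the layer-by-layer backward sweep described above. Here is a hedged sketch of that sweep, assuming the weight matrices and the weighted inputs `zs` stored during a forward pass are already in hand; the list-indexing conventions are my own for this sketch, not the book's:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        return sigmoid(z) * (1 - sigmoid(z))

    def backward_errors(weights, zs, delta_L):
        """Compute delta^l for every layer via (BP2),
        delta^l = ((w^{l+1})^T delta^{l+1}) Hadamard sigma'(z^l),
        starting from the output error delta^L computed via (BP1)."""
        deltas = [delta_L]
        # weights[i] connects layer i+1 to layer i+2; zs[i] is z for layer i+2.
        # Walk back from layer L-1 down to layer 2, reusing the stored z^l.
        for w_next, z in zip(reversed(weights[1:]), reversed(zs[:-1])):
            delta = np.dot(w_next.T, deltas[0]) * sigmoid_prime(z)
            deltas.insert(0, delta)
        return deltas

Once the `deltas` are in hand, (BP3) and (BP4) turn them directly into the gradient: each bias gradient is the corresponding $\delta^l_j$, and each weight gradient is an activation from the previous layer times an error from the next.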
There are other insights along these lines which can be obtained from (BP1)-(BP4). Let's start by looking at the output layer. Consider the term $\sigma'(z^L_j)$ in (BP1). Recall from the graph of the sigmoid function in the last chapter that the $\sigma$ function becomes very flat when $\sigma(z^L_j)$ is approximately 0 or 1. When this occurs we will have $\sigma'(z^L_j) \approx 0$. And so the lesson is that a weight in the final layer will learn slowly if the output neuron is either low activation ($\approx 0$) or high activation ($\approx 1$). In this case it's common to say the output neuron has saturated and, as a result, the weight has stopped learning (or is learning slowly). Similar remarks hold also for the biases of output neurons.

We can obtain similar insights for earlier layers. In particular, note the $\sigma'(z^l)$ term in (BP2). This means that $\delta^l_j$ is likely to get small if the neuron is near saturation. And this, in turn, means that any weights input to a saturated neuron will learn slowly. (This reasoning won't hold if $(w^{l+1})^T \delta^{l+1}$ has large enough entries to compensate for the smallness of $\sigma'(z^l_j)$. But I'm speaking of the general tendency.)

Summing up, we've learnt that a weight will learn slowly if either the input neuron is low-activation, or if the output neuron has saturated, i.e. is either high- or low-activation. None of these observations is too greatly surprising. Still, they help improve our mental model of what's going on as a neural network learns. Furthermore, we can turn this type of reasoning around. The four fundamental equations turn out to hold for any activation function, not just the standard sigmoid function (that's because, as we'll see in a moment, the proofs don't use any special properties of $\sigma$). And so we can use these equations to design activation functions which have particular desired learning properties. As an example to give you the idea, suppose we were to choose a (non-sigmoid) activation function $\sigma$ so that $\sigma'$ is always positive, and never gets close to zero. That would prevent the slow-down of learning that occurs when ordinary sigmoid neurons saturate. Later in the book we'll see examples where this kind of modification is made to the activation function. Keeping the four equations (BP1)-(BP4) in mind can help explain why such modifications are tried, and what impact they can have.

Alternate presentation of the equations of backpropagation: I've stated the equations of backpropagation (notably (BP1) and (BP2)) using the Hadamard product. This presentation may be disconcerting if you're unused to the Hadamard product. There's an alternative approach, based on conventional matrix multiplication, which some readers may find enlightening. (1) Show that (BP1) may be rewritten as

$$ \delta^L = \Sigma'(z^L) \nabla_a C, \tag{33} $$

where $\Sigma'(z^L)$ is a square matrix whose diagonal entries are the values $\sigma'(z^L_j)$, and whose off-diagonal entries are zero. Note that this matrix acts on $\nabla_a C$ by conventional matrix multiplication. (2) Show that (BP2) may be rewritten as

$$ \delta^l = \Sigma'(z^l) (w^{l+1})^T \delta^{l+1}. \tag{34} $$

(3) By combining observations (1) and (2) show that

$$ \delta^l = \Sigma'(z^l) (w^{l+1})^T \ldots \Sigma'(z^{L-1}) (w^L)^T \Sigma'(z^L) \nabla_a C. \tag{35} $$

For readers comfortable with matrix multiplication this equation may be easier to understand than (BP1) and (BP2). The reason I've focused on (BP1) and (BP2) is because that approach turns out to be faster to implement numerically.
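As a sanity check on observation (1) - not the book's code, and with made-up values - here is a sketch comparing the Hadamard form (BP1a) with the diagonal-matrix form (33), using `np.diag` to build $\Sigma'(z^L)$:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        return sigmoid(z) * (1 - sigmoid(z))

    z_L = np.array([0.5, -1.2, 2.0])
    nabla_a_C = sigmoid(z_L) - np.array([1.0, 0.0, 0.0])  # (a^L - y), quadratic cost

    # Hadamard form (BP1a): delta^L = nabla_a C Hadamard sigma'(z^L).
    delta_hadamard = nabla_a_C * sigmoid_prime(z_L)

    # Matrix form (33): delta^L = Sigma'(z^L) nabla_a C, Sigma' diagonal.
    Sigma_prime = np.diag(sigmoid_prime(z_L))
    delta_matrix = np.dot(Sigma_prime, nabla_a_C)

    print(np.allclose(delta_hadamard, delta_matrix))  # True: the two forms agree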
We'll now prove the four fundamental equations (BP1)-(BP4). All four are consequences of the chain rule from multivariable calculus. If you're comfortable with the chain rule, then I strongly encourage you to attempt the derivation yourself before reading on.

Let's begin with Equation (BP1), which gives an expression for the output error, $\delta^L$. To prove this equation, recall that by definition

$$ \delta^L_j = \frac{\partial C}{\partial z^L_j}. \tag{36} $$

Applying the chain rule, we can re-express the partial derivative above in terms of partial derivatives with respect to the output activations,

$$ \delta^L_j = \sum_k \frac{\partial C}{\partial a^L_k} \frac{\partial a^L_k}{\partial z^L_j}, \tag{37} $$

where the sum is over all neurons $k$ in the output layer. Of course, the output activation $a^L_k$ of the $k^{\rm th}$ neuron depends only on the weighted input $z^L_j$ for the $j^{\rm th}$ neuron when $k = j$. And so $\partial a^L_k / \partial z^L_j$ vanishes when $k \neq j$. As a result we can simplify the previous equation to

$$ \delta^L_j = \frac{\partial C}{\partial a^L_j} \frac{\partial a^L_j}{\partial z^L_j}. \tag{38} $$

Recalling that $a^L_j = \sigma(z^L_j)$ the second term on the right can be written as $\sigma'(z^L_j)$, and the equation becomes

$$ \delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j), \tag{39} $$

which is just (BP1), in component form.

Next, we'll prove (BP2), which gives an equation for the error $\delta^l$ in terms of the error in the next layer, $\delta^{l+1}$. To do this, we want to rewrite $\delta^l_j = \partial C / \partial z^l_j$ in terms of $\delta^{l+1}_k = \partial C / \partial z^{l+1}_k$. We can do this using the chain rule,

$$ \delta^l_j = \frac{\partial C}{\partial z^l_j} \tag{40} $$
$$ = \sum_k \frac{\partial C}{\partial z^{l+1}_k} \frac{\partial z^{l+1}_k}{\partial z^l_j} \tag{41} $$
$$ = \sum_k \frac{\partial z^{l+1}_k}{\partial z^l_j} \delta^{l+1}_k, \tag{42} $$

where in the last line we have interchanged the two terms on the right-hand side, and substituted the definition of $\delta^{l+1}_k$. To evaluate the first term on the last line, note that

$$ z^{l+1}_k = \sum_j w^{l+1}_{kj} a^l_j + b^{l+1}_k = \sum_j w^{l+1}_{kj} \sigma(z^l_j) + b^{l+1}_k. \tag{43} $$

Differentiating, we obtain

$$ \frac{\partial z^{l+1}_k}{\partial z^l_j} = w^{l+1}_{kj} \sigma'(z^l_j). \tag{44} $$

Substituting back into (42) we obtain

$$ \delta^l_j = \sum_k w^{l+1}_{kj} \delta^{l+1}_k \sigma'(z^l_j). \tag{45} $$

This is just (BP2) written in component form.

The final two equations we want to prove are (BP3) and (BP4). These also follow from the chain rule, in a manner similar to the proofs of the two equations above. I leave them to you as an exercise.

That completes the proof of the four fundamental equations of backpropagation. The proof may seem complicated. But it's really just the outcome of carefully applying the chain rule. A little less succinctly, we can think of backpropagation as a way of computing the gradient of the cost function by systematically applying the chain rule from multivariable calculus. That's all there really is to backpropagation - the rest is details.
The backpropagation equations provide us with a way of computing the gradient of the cost function. Let's explicitly write this out in the form of an algorithm:

1. Input $x$: Set the corresponding activation $a^1$ for the input layer.
2. Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^l = w^l a^{l-1} + b^l$ and $a^l = \sigma(z^l)$.
3. Output error $\delta^L$: Compute the vector $\delta^L = \nabla_a C \odot \sigma'(z^L)$.
4. Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^l = ((w^{l+1})^T \delta^{l+1}) \odot \sigma'(z^l)$.
5. Output: The gradient of the cost function is given by $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$ and $\frac{\partial C}{\partial b^l_j} = \delta^l_j$.

Examining the algorithm you can see why it's called backpropagation. We compute the error vectors $\delta^l$ backward, starting from the final layer. It may seem peculiar that we're going through the network backward. But if you think about the proof of backpropagation, the backward movement is a consequence of the fact that the cost is a function of outputs from the network. To understand how the cost varies with earlier weights and biases we need to repeatedly apply the chain rule, working backward through the layers to obtain usable expressions.

Exercises:

- Backpropagation with a single modified neuron: Suppose we modify a single neuron in a feedforward network so that the output from the neuron is given by $f(\sum_j w_j x_j + b)$, where $f$ is some function other than the sigmoid. How should we modify the backpropagation algorithm in this case?
- Backpropagation with linear neurons: Suppose we replace the usual non-linear $\sigma$ function with $\sigma(z) = z$ throughout the network. Rewrite the backpropagation algorithm for this case.

As I've described it above, the backpropagation algorithm computes the gradient of the cost function for a single training example, $C = C_x$. In practice, it's common to combine backpropagation with a learning algorithm such as stochastic gradient descent, in which we compute the gradient for many training examples. In particular, given a mini-batch of $m$ training examples, the following algorithm applies a gradient descent learning step based on that mini-batch:

1. Input a set of training examples.
2. For each training example $x$: Set the corresponding input activation $a^{x,1}$, and perform the following steps:
   - Feedforward: For each $l = 2, 3, \ldots, L$ compute $z^{x,l} = w^l a^{x,l-1} + b^l$ and $a^{x,l} = \sigma(z^{x,l})$.
   - Output error $\delta^{x,L}$: Compute the vector $\delta^{x,L} = \nabla_a C_x \odot \sigma'(z^{x,L})$.
   - Backpropagate the error: For each $l = L-1, L-2, \ldots, 2$ compute $\delta^{x,l} = ((w^{l+1})^T \delta^{x,l+1}) \odot \sigma'(z^{x,l})$.
3. Gradient descent: For each $l = L, L-1, \ldots, 2$ update the weights according to the rule $w^l \rightarrow w^l - \frac{\eta}{m} \sum_x \delta^{x,l} (a^{x,l-1})^T$, and the biases according to the rule $b^l \rightarrow b^l - \frac{\eta}{m} \sum_x \delta^{x,l}$.

Of course, to implement stochastic gradient descent in practice you also need an outer loop generating mini-batches of training examples, and an outer loop stepping through multiple epochs of training. I've omitted those for simplicity.

Having understood backpropagation in the abstract, we can now understand the code used in the last chapter to implement backpropagation. Recall from that chapter that the code was contained in the update_mini_batch and backprop methods of the Network class. The code for these methods is a direct translation of the algorithm described above. In particular, the update_mini_batch method updates the Network's weights and biases by computing the gradient for the current mini_batch of training examples. Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which uses the backprop method to figure out the partial derivatives $\partial C_x / \partial b^l_j$ and $\partial C_x / \partial w^l_{jk}$. The backprop method follows the algorithm in the last section closely. There is one small change - we use a slightly different approach to indexing the layers. This change is made to take advantage of a feature of Python, namely the use of negative list indices to count backward from the end of a list, so, e.g., l[-3] is the third-last entry in a list l. The code for backprop appears below, together with a few helper functions, which are used to compute the $\sigma$ function, the derivative $\sigma'$, and the derivative of the cost function. With these inclusions you should be able to understand the code in a self-contained way. If something's tripping you up, you may find it helpful to consult the original description (and complete listing) of the code.
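The original listing does not reproduce cleanly in this version of the text, so what follows is a sketch reconstructed from the algorithm above, in the spirit of the book's network.py; it is written for Python 3 (the book's original targets Python 2), and details may differ from the original listing:

    import numpy as np

    # Methods as they would appear inside the Network class from network.py.

    def update_mini_batch(self, mini_batch, eta):
        """Apply one gradient descent step to the mini-batch, learning rate eta."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        for x, y in mini_batch:
            # backprop does almost all the work: the per-example gradients.
            delta_nabla_b, delta_nabla_w = self.backprop(x, y)
            nabla_b = [nb + dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]
            nabla_w = [nw + dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]
        self.weights = [w - (eta / len(mini_batch)) * nw
                        for w, nw in zip(self.weights, nabla_w)]
        self.biases = [b - (eta / len(mini_batch)) * nb
                       for b, nb in zip(self.biases, nabla_b)]

    def backprop(self, x, y):
        """Return (nabla_b, nabla_w), the gradient of C_x, layer by layer."""
        nabla_b = [np.zeros(b.shape) for b in self.biases]
        nabla_w = [np.zeros(w.shape) for w in self.weights]
        # Feedforward, storing all z vectors and activations layer by layer.
        activation = x
        activations = [x]
        zs = []
        for b, w in zip(self.biases, self.weights):
            z = np.dot(w, activation) + b
            zs.append(z)
            activation = sigmoid(z)
            activations.append(activation)
        # Output error, (BP1) for the quadratic cost.
        delta = self.cost_derivative(activations[-1], y) * sigmoid_prime(zs[-1])
        nabla_b[-1] = delta                               # (BP3)
        nabla_w[-1] = np.dot(delta, activations[-2].T)    # (BP4)
        # Backpropagate: negative indices count back from the last layer.
        for l in range(2, self.num_layers):
            delta = np.dot(self.weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])  # (BP2)
            nabla_b[-l] = delta
            nabla_w[-l] = np.dot(delta, activations[-l - 1].T)
        return (nabla_b, nabla_w)

    def cost_derivative(self, output_activations, y):
        """Vector of partial derivatives dC_x/da^L_j for the quadratic cost."""
        return output_activations - y

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        return sigmoid(z) * (1 - sigmoid(z))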
Problem: Fully matrix-based approach to backpropagation over a mini-batch. Our implementation of stochastic gradient descent loops over training examples in a mini-batch. It's possible to modify the backpropagation algorithm so that it computes the gradients for all training examples in a mini-batch simultaneously. The idea is that instead of beginning with a single input vector, $x$, we can begin with a matrix $X = [x_1 \, x_2 \, \ldots \, x_m]$ whose columns are the vectors in the mini-batch. We forward-propagate by multiplying by the weight matrices, adding a suitable matrix for the bias terms, and applying the sigmoid function everywhere. We backpropagate along similar lines. Explicitly write out pseudocode for this approach to the backpropagation algorithm. Modify network.py so that it uses this fully matrix-based approach. The advantage of this approach is that it takes full advantage of modern libraries for linear algebra. As a result it can be quite a bit faster than looping over the mini-batch. (On my laptop, for example, the speedup is about a factor of two when run on MNIST classification problems like those we considered in the last chapter.) In practice, all serious libraries for backpropagation use this fully matrix-based approach or some variant.

In what sense is backpropagation a fast algorithm? To answer this question, let's consider another approach to computing the gradient. Imagine it's the early days of neural networks research. Maybe it's the 1950s or 1960s, and you're the first person in the world to think of using gradient descent to learn! But to make the idea work you need a way of computing the gradient of the cost function. You think back to your knowledge of calculus, and decide to see if you can use the chain rule to compute the gradient. But after playing around a bit, the algebra looks complicated, and you get discouraged. So you try to find another approach.

You decide to regard the cost as a function of the weights $C = C(w)$ alone (we'll get back to the biases in a moment). You number the weights $w_1, w_2, \ldots$, and want to compute $\partial C / \partial w_j$ for some particular weight $w_j$. An obvious way of doing that is to use the approximation

$$ \frac{\partial C}{\partial w_j} \approx \frac{C(w + \epsilon e_j) - C(w)}{\epsilon}, \tag{46} $$

where $\epsilon > 0$ is a small positive number, and $e_j$ is the unit vector in the $j^{\rm th}$ direction. In other words, we can estimate $\partial C / \partial w_j$ by computing the cost $C$ for two slightly different values of $w_j$, and then applying Equation (46). The same idea will let us compute the partial derivatives $\partial C / \partial b$ with respect to the biases.
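Here is a minimal sketch of this numerical estimate; the toy cost function, whose true gradient is $2(w - 1)$, is a made-up stand-in used purely to check the approximation:

    import numpy as np

    def numerical_gradient(cost, w, eps=1e-5):
        """Estimate dC/dw_j via (C(w + eps*e_j) - C(w)) / eps, Equation (46).

        Requires one extra evaluation of `cost` per weight - the source of
        the slowness discussed below."""
        grad = np.zeros_like(w)
        base = cost(w)
        for j in range(len(w)):
            w_step = w.copy()
            w_step[j] += eps
            grad[j] = (cost(w_step) - base) / eps
        return grad

    # A made-up quadratic cost with a known gradient, 2*(w - 1), as a check.
    cost = lambda w: np.sum((w - 1.0) ** 2)
    w = np.array([0.5, 2.0, -1.0])
    print(numerical_gradient(cost, w))  # approximately [-1., 2., -4.]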
This approach looks very promising. It's simple conceptually, and extremely easy to implement, using just a few lines of code. Certainly, it looks much more promising than the idea of using the chain rule to compute the gradient!

Unfortunately, while this approach appears promising, when you implement the code it turns out to be extremely slow. To understand why, imagine we have a million weights in our network. Then for each distinct weight $w_j$ we need to compute $C(w + \epsilon e_j)$ in order to compute $\partial C / \partial w_j$. That means that to compute the gradient we need to compute the cost function a million different times, requiring a million forward passes through the network (per training example). We need to compute $C(w)$ as well, so that's a total of a million and one passes through the network.

What's clever about backpropagation is that it enables us to simultaneously compute all the partial derivatives $\partial C / \partial w_j$ using just one forward pass through the network, followed by one backward pass through the network. Roughly speaking, the computational cost of the backward pass is about the same as the forward pass. (This should be plausible, but it requires some analysis to make a careful statement. It's plausible because the dominant computational cost in the forward pass is multiplying by the weight matrices, while in the backward pass it's multiplying by the transposes of the weight matrices. These operations obviously have similar computational cost.) And so the total cost of backpropagation is roughly the same as making just two forward passes through the network. Compare that to the million and one forward passes we needed for the approach based on (46). And so even though backpropagation appears superficially more complex than the approach based on (46), it's actually much, much faster.

This speedup was first fully appreciated in 1986, and it greatly expanded the range of problems that neural networks could solve. That, in turn, caused a rush of people using neural networks. Of course, backpropagation is not a panacea. Even in the late 1980s people ran up against limits, especially when attempting to use backpropagation to train deep neural networks, i.e. networks with many hidden layers. Later in the book we'll see how modern computers and some clever new ideas now make it possible to use backpropagation to train such deep neural networks.

As I've explained it, backpropagation presents two mysteries. First, what's the algorithm really doing? We've developed a picture of the error being backpropagated from the output. But can we go any deeper, and build up more intuition about what is going on when we do all these matrix and vector multiplications? The second mystery is how someone could ever have discovered backpropagation in the first place. It's one thing to follow the steps in an algorithm, or even to follow the proof that the algorithm works. But that doesn't mean you understand the problem so well that you could have discovered the algorithm in the first place. Is there a plausible line of reasoning that could have led you to discover the backpropagation algorithm? In this section I'll address both these mysteries.

To improve our intuition about what the algorithm is doing, let's imagine that we've made a small change $\Delta w^l_{jk}$ to some weight in the network, $w^l_{jk}$. That change in weight will cause a change in the output activation from the corresponding neuron. That, in turn, will cause a change in all the activations in the next layer. Those changes will in turn cause changes in the next layer, and then the next, and so on all the way through to causing a change in the final layer, and then in the cost function. The change $\Delta C$ in the cost is related to the change $\Delta w^l_{jk}$ in the weight by the equation

$$ \Delta C \approx \frac{\partial C}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{47} $$

This suggests that a possible approach to computing $\frac{\partial C}{\partial w^l_{jk}}$ is to carefully track how a small change in $w^l_{jk}$ propagates to cause a small change in $C$.
If we can do that, being careful to express everything along the way in terms of easily computable quantities, then we should be able to compute $\partial C / \partial w^l_{jk}$.

Let's try to carry this out. The change $\Delta w^l_{jk}$ causes a small change $\Delta a^l_j$ in the activation of the $j^{\rm th}$ neuron in the $l^{\rm th}$ layer. This change is given by

$$ \Delta a^l_j \approx \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{48} $$

The change in activation $\Delta a^l_j$ will cause changes in all the activations in the next layer, i.e. the $(l+1)^{\rm th}$ layer. We'll concentrate on the way just a single one of those activations is affected, say $a^{l+1}_q$. In fact, it'll cause the following change:

$$ \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \Delta a^l_j. \tag{49} $$

Substituting in the expression from Equation (48), we get:

$$ \Delta a^{l+1}_q \approx \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}. \tag{50} $$

Of course, the change $\Delta a^{l+1}_q$ will, in turn, cause changes in the activations in the next layer. In fact, we can imagine a path all the way through the network from $w^l_{jk}$ to $C$, with each change in activation causing a change in the next activation, and, finally, a change in the cost at the output. If the path goes through activations $a^l_j, a^{l+1}_q, \ldots, a^{L-1}_n, a^L_m$ then the resulting expression is

$$ \Delta C \approx \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{51} $$

that is, we've picked up a $\partial a / \partial a$ type term for each additional neuron we've passed through, as well as the $\partial C / \partial a^L_m$ term at the end. This represents the change in $C$ due to changes in the activations along this particular path through the network.

Of course, there's many paths by which a change in $w^l_{jk}$ can propagate to affect the cost, and we've been considering just a single path. To compute the total change in $C$ it is plausible that we should sum over all the possible paths between the weight and the final cost, i.e.

$$ \Delta C \approx \sum_{mnp \ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}} \Delta w^l_{jk}, \tag{52} $$

where we've summed over all possible choices for the intermediate neurons along the path. Comparing with (47) we see that

$$ \frac{\partial C}{\partial w^l_{jk}} = \sum_{mnp \ldots q} \frac{\partial C}{\partial a^L_m} \frac{\partial a^L_m}{\partial a^{L-1}_n} \frac{\partial a^{L-1}_n}{\partial a^{L-2}_p} \ldots \frac{\partial a^{l+1}_q}{\partial a^l_j} \frac{\partial a^l_j}{\partial w^l_{jk}}. \tag{53} $$

Now, Equation (53) looks complicated. However, it has a nice intuitive interpretation. We're computing the rate of change of $C$ with respect to a weight in the network. What the equation tells us is that every edge between two neurons in the network is associated with a rate factor which is just the partial derivative of one neuron's activation with respect to the other neuron's activation. The edge from the first weight to the first neuron has a rate factor $\partial a^l_j / \partial w^l_{jk}$. The rate factor for a path is just the product of the rate factors along the path. And the total rate of change $\partial C / \partial w^l_{jk}$ is just the sum of the rate factors of all paths from the initial weight to the final cost. A small numerical sketch of this path-summing procedure follows.
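To see the path-sum picture in action, here is a hedged sketch on a made-up network with layer sizes 1, 1, 2, 1 (biases omitted for brevity): it enumerates both paths from a single weight to the cost, multiplies the rate factors along each, sums, and checks the answer against a finite-difference estimate:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        return sigmoid(z) * (1 - sigmoid(z))

    # A made-up 1 -> 1 -> 2 -> 1 network, small enough to enumerate every path.
    x = 0.7                      # single input activation
    w2 = 0.4                     # weight into the lone layer-2 neuron
    w3 = np.array([0.3, -0.8])   # weights from layer 2 to the two layer-3 neurons
    w4 = np.array([1.1, 0.6])    # weights from layer 3 to the single output
    y = 1.0                      # desired output

    def forward(w):
        z2 = w * x;            a2 = sigmoid(z2)
        z3 = w3 * a2;          a3 = sigmoid(z3)
        z4 = np.dot(w4, a3);   a4 = sigmoid(z4)
        return z2, a2, z3, a3, z4, a4

    z2, a2, z3, a3, z4, a4 = forward(w2)

    # Rate factors along each edge of the two paths from w2 to the cost C.
    da2_dw = sigmoid_prime(z2) * x     # edge: weight -> layer-2 neuron
    da3_da2 = sigmoid_prime(z3) * w3   # edges: layer 2 -> each layer-3 neuron
    da4_da3 = sigmoid_prime(z4) * w4   # edges: layer 3 -> output neuron
    dC_da4 = a4 - y                    # final factor, for C = (a4 - y)^2 / 2

    # Equation (53): sum over paths of the product of rate factors.
    dC_dw_paths = sum(dC_da4 * da4_da3[q] * da3_da2[q] * da2_dw for q in range(2))

    # Check against a finite-difference estimate of dC/dw.
    eps = 1e-6
    C = lambda w: 0.5 * (forward(w)[-1] - y) ** 2
    print(dC_dw_paths, (C(w2 + eps) - C(w2 - eps)) / (2 * eps))

The two printed numbers should agree to several decimal places: the sum over the two paths really is the derivative that backpropagation computes.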
What I've been providing up to now is a heuristic argument, a way of thinking about what's going on when you perturb a weight in a network. Let me sketch out a line of thinking you could use to further develop this argument. First, you could derive explicit expressions for all the individual partial derivatives in Equation (53). That's easy to do with a bit of calculus. Having done that, you could then try to figure out how to write all the sums over indices as matrix multiplications. This turns out to be tedious, and requires some persistence, but not extraordinary insight. After doing all this, and then simplifying as much as possible, what you discover is that you end up with exactly the backpropagation algorithm! And so you can think of the backpropagation algorithm as providing a way of computing the sum over the rate factor for all these paths. Or, to put it slightly differently, the backpropagation algorithm is a clever way of keeping track of small perturbations to the weights (and biases) as they propagate through the network, reach the output, and then affect the cost.

Now, I'm not going to work through all this here. It's messy and requires considerable care to work through all the details. If you're up for a challenge, you may enjoy attempting it. And even if not, I hope this line of thinking gives you some insight into what backpropagation is accomplishing.

What about the other mystery - how backpropagation could have been discovered in the first place? In fact, if you follow the approach I just sketched you will discover a proof of backpropagation. Unfortunately, the proof is quite a bit longer and more complicated than the one I described earlier in this chapter. So how was that short (but more mysterious) proof discovered? What you find when you write out all the details of the long proof is that, after the fact, there are several obvious simplifications staring you in the face. You make those simplifications, get a shorter proof, and write that out. And then several more obvious simplifications jump out at you. So you repeat again. The result after a few iterations is the proof we saw earlier - short, but somewhat obscure, because all the signposts to its construction have been removed! (There is one clever step required. In Equation (53) the intermediate variables are activations like $a^{l+1}_q$. The clever idea is to switch to using weighted inputs, like $z^{l+1}_q$, as the intermediate variables. If you don't have this idea, and instead continue using the activations $a^{l+1}_q$, the proof you obtain turns out to be slightly more complex than the proof given earlier in the chapter.) I am, of course, asking you to trust me on this, but there really is no great mystery to the origin of the earlier proof. It's just a lot of hard work simplifying the proof I've sketched in this section.

In academic work, please cite this book as: Michael A. Nielsen, "Neural Networks and Deep Learning", Determination Press, 2015. This work is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. This means you're free to copy, share, and build on this book, but not to sell it. If you're interested in commercial use, please contact me. Last update: Thu Jan 19 06:09:48 2017.