Aria

Can you see me?



[ITOC-ch5] REDUCIBILITY

Posted on 2018-05-31 | In Introduction to the Theory of Computation |


If A reduces to B

  • we can use a solution to B to solve A
  • solving A is no harder than solving B

$ A \le B \text{ and } B \text{ decidable} \Rightarrow A \text{ decidable} $

$ A \le B \text{ and } A \text{ undecidable} \Rightarrow B \text{ undecidable} $
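The first implication is exactly how reductions are used in practice: a computable map f with $ w \in A \Leftrightarrow f(w) \in B $ turns any decider for B into a decider for A. A toy sketch (the languages and names here are my own, not from the book):

```python
# Mapping reduction: A reduces to B via a computable f with
#   w in A  <=>  f(w) in B.
# Then a decider for B yields a decider for A.

# Toy instance (my own): A = strings with an even number of 'a's,
# B = even natural numbers, f counts the 'a's.
def f(w):
    return w.count("a")

def decide_B(n):            # decider for B
    return n % 2 == 0

def decide_A(w):            # decider for A, built from f and decide_B
    return decide_B(f(w))
```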


5.1 UNDECIDABLE PROBLEMS FROM LANGUAGE THEORY

General Idea

  • Contradiction on $ A \le B \text{ and } B \text{ decidable} \Rightarrow A \text{ decidable} $

  • Suppose we have a TM R that decides $ HALT_{TM} $ (or some other language in question).

  • Then we can use R to construct a TM S that decides $ A_{TM} $,
  • thus obtaining a contradiction, since $ A_{TM} $ is undecidable.

[ITOC-ch4] DECIDABILITY

Posted on 2018-05-25 | In Introduction to the Theory of Computation |


Recall:

Recognizer

  • if $ w∈L $, then M must accept w.
  • if $ w∉L $, then M may reject w or loop forever on input w.

Decider

  • if $ w∈L $, then M must accept w.
  • if $ w∉L $, then M must reject w.


4.1 DECIDABLE LANGUAGES

DECIDABLE PROBLEMS CONCERNING REGULAR LANGUAGES

$ A_{DFA} = \{ \langle B,w \rangle \mid B \text{ is a DFA that accepts input string } w \} $.

Proof that it’s a decidable language, by designing a decider:

  • M = “On input <B, w>, where B is a DFA and w is a string:
    • Simulate B on input w.
    • If the simulation ends in an accept state, accept. If it ends in a nonaccepting state, reject.”
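The decider M is just a DFA simulator. A minimal sketch, assuming a dictionary encoding of the transition function (my encoding, not the book's):

```python
# Decider M for A_DFA as a simulation loop.  delta is assumed to be a dict
# mapping (state, symbol) -> state.
def simulate_dfa(states, alphabet, delta, start, accept, w):
    q = start
    for symbol in w:
        q = delta[(q, symbol)]       # one deterministic step per input symbol
    return q in accept               # accept iff we end in an accept state

# Example: DFA over {0, 1} accepting strings that end in 1.
delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q0", ("q1", "1"): "q1"}
```

Because a DFA always halts after |w| steps, this simulation is guaranteed to terminate, which is why M is a decider and not merely a recognizer.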

[ITOC-ch3] THE CHURCH—TURING THESIS

Posted on 2018-05-19 | In Introduction to the Theory of Computation |


3.1 TURING MACHINES

A Turing machine has unrestricted access to unlimited memory.


The Turing machine model uses an infinite tape as its unlimited memory.

It has a tape head that can read and write symbols and move both left and right on the tape.

The outputs accept and reject are obtained by entering designated accepting and rejecting states. If it doesn’t enter an accepting or a rejecting state, it will go on forever, never halting.
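That three-way behavior (accept, reject, or run forever) can be sketched as a simulator. The encoding and the step cap below are my own assumptions, added because a real TM may genuinely never halt:

```python
# Minimal Turing machine simulator (encoding is my own assumption).
# delta maps (state, read_symbol) -> (next_state, write_symbol, move).
def run_tm(delta, w, start="q0", accept="qacc", reject="qrej", max_steps=1000):
    tape = dict(enumerate(w))            # sparse infinite tape, blank = "_"
    head, state = 0, start
    for _ in range(max_steps):
        if state in (accept, reject):    # halting: designated states only
            return state == accept
        sym = tape.get(head, "_")
        state, tape[head], move = delta[(state, sym)]
        head += 1 if move == "R" else -1
    return None                          # no halt within the step budget

# Example TM: accept exactly the strings that start with '1'.
delta = {("q0", "1"): ("qacc", "1", "R"),
         ("q0", "0"): ("qrej", "0", "R"),
         ("q0", "_"): ("qrej", "_", "R")}
```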


DEF


[ITOC-ch2] CONTEXT-FREE LANGUAGES

Posted on 2018-05-19 | In Introduction to the Theory of Computation |


In this chapter we present context-free grammars to describe certain features that have a recursive structure.

They help us organize and understand the relationships of terms.

Application: compilers and interpreters

  • Most compilers and interpreters contain a parser that extracts the meaning of a program before generating the compiled code or performing the interpreted execution.
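A parser for a recursive structure is naturally recursive itself. A tiny recursive-descent sketch for a made-up grammar, E → T '+' E | T, T → digit (my example, not the book's):

```python
# Recursive-descent parser for the toy grammar:
#   E -> T '+' E | T
#   T -> digit
def parse_expr(tokens, i=0):
    node, i = parse_term(tokens, i)
    if i < len(tokens) and tokens[i] == "+":
        rhs, i = parse_expr(tokens, i + 1)   # recursion mirrors the grammar
        return ("+", node, rhs), i
    return node, i

def parse_term(tokens, i):
    if not tokens[i].isdigit():
        raise SyntaxError("expected digit at token %d" % i)
    return int(tokens[i]), i + 1
```

`parse_expr(["1", "+", "2", "+", "3"])` recovers the nested tree structure that a flat token list hides, which is the "extracting the meaning" step a compiler's parser performs.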

[ITOC-ch1] REGULAR LANGUAGES

Posted on 2018-05-10 | In Introduction to the Theory of Computation |


1.1 FINITE AUTOMATA

Deterministic Finite Automata

  • DEF
  • Every state of a DFA always has exactly one exiting transition arrow for each symbol in the alphabet.
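That defining property can be checked mechanically: the transition function must have exactly one entry for every (state, symbol) pair. A small sketch with an assumed dictionary encoding:

```python
# Check the DFA property: exactly one exiting transition per (state, symbol).
# delta is assumed to be a dict mapping (state, symbol) -> state.
def is_valid_dfa(states, alphabet, delta):
    return (len(delta) == len(states) * len(alphabet)
            and all((q, a) in delta for q in states for a in alphabet))

delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q0", ("q1", "1"): "q1"}
```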



[SICP-ch4] Metalinguistic Abstraction

Posted on 2018-04-27 | In Structure and Interpretation of Computer Programs |


Establishing new languages and implementing them by constructing evaluators.

(a condensed version)


evaluators ( interpreters )

  • a procedure that, when applied to an expression of the language, performs the actions required to evaluate that expression.


4.1 The Metacircular Evaluator

metacircular

  • An evaluator that is written in the same language that it evaluates
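The shape of such an evaluator can be shown in a few lines. The book's version is Scheme-in-Scheme; this Python sketch is not metacircular, it only illustrates the eval structure for a tiny prefix language of my own:

```python
# The eval core of an evaluator for a tiny prefix language.
def evaluate(expr, env):
    if isinstance(expr, (int, float)):   # self-evaluating expressions
        return expr
    if isinstance(expr, str):            # variable lookup
        return env[expr]
    op, *args = expr                     # application: eval operands, then apply
    return env[op](*[evaluate(a, env) for a in args])

env = {"+": lambda a, b: a + b, "*": lambda a, b: a * b, "x": 3}
```

`evaluate(["+", "x", ["*", 2, 4]], env)` walks the expression tree exactly the way the metacircular evaluator's eval/apply loop does.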

[SICP-ch3] Modularity, Objects, and State

Posted on 2018-04-18 | In Structure and Interpretation of Computer Programs |


3.1 Assignment and Local State

The system is decomposed into computational objects with time-varying state.

The language must provide an assignment operator to enable us to change the value of a variable.


3.1.1 Local State Variables

model the situation of withdrawing money from a bank account

(define (make-withdraw balance)
  (lambda (amount)
    (if (>= balance amount)
        (begin (set! balance (- balance amount))
               balance)
        "Insufficient funds")))
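For comparison, the same local-state idea sketched as a Python closure (an illustration of mine, not from the book): each call to make_withdraw captures a fresh balance of its own.

```python
# Python version of make-withdraw: the closure keeps its own local
# `balance`, mutated via `nonlocal` assignment (Python's set! analogue here).
def make_withdraw(balance):
    def withdraw(amount):
        nonlocal balance
        if balance >= amount:
            balance -= amount
            return balance
        return "Insufficient funds"
    return withdraw
```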

[NN&DL-ch3] Improving the way neural networks learn

Posted on 2018-04-15 | In Neural Networks and Deep Learning |


The techniques we’ll develop in this chapter include

  • a better choice of cost function, the cross-entropy cost function;
  • four so-called “regularization” methods ( L1 and L2 regularization, dropout, and artificial expansion of the training data ), which make our networks better at generalizing beyond the training data;
  • a better method for initializing the weights in the network;
  • a set of heuristics to help choose good hyper-parameters;
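The cross-entropy cost for a single training example with one sigmoid output neuron is $ C = -[\, y \ln a + (1-y)\ln(1-a) \,] $; a direct sketch:

```python
import math

# Cross-entropy cost for one training example, single output neuron:
#   C = -[ y*ln(a) + (1 - y)*ln(1 - a) ],  a in (0, 1), y in {0, 1}.
# It is small exactly when the activation a is close to the label y.
def cross_entropy(a, y):
    return -(y * math.log(a) + (1 - y) * math.log(1 - a))
```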

[NN&DL-ch2] How the backpropagation algorithm works

Posted on 2018-04-15 | In Neural Networks and Deep Learning |


discuss how to compute the gradient ( $\nabla C$ ) of the cost function

Heart of backpropagation

  • an expression for the partial derivative $∂ C /∂ w$ of the cost function C with respect to any weight w ( or bias b ) in the network
  • tells us how quickly the cost changes when we change the weights and biases
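For one sigmoid neuron with quadratic cost (a toy case of mine, not the full network), the partial derivative that backpropagation computes can be checked against a finite difference:

```python
import math

# One sigmoid neuron, quadratic cost C = (a - y)^2 / 2, a = sigmoid(w*x + b).
# The chain rule gives  dC/dw = (a - y) * a * (1 - a) * x.
def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def cost(w, b, x, y):
    a = sigmoid(w * x + b)
    return (a - y) ** 2 / 2

def dC_dw(w, b, x, y):
    a = sigmoid(w * x + b)
    return (a - y) * a * (1 - a) * x
```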

[NN&DL-ch1] Using neural nets to recognize handwritten digits

Posted on 2018-04-13 | In Neural Networks and Deep Learning |


Perceptrons

  • a type of artificial neuron
  • a device that makes decisions by weighing up evidence
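The weighing-up-evidence rule in code (the weights and threshold below are made up for illustration):

```python
# A perceptron: output 1 if the weighted sum of the inputs exceeds a
# threshold, else 0.
def perceptron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0
```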

© 2017 — 2019 Aria