
Lesson 5.3. Concept Learning as Search

In the context of concept learning, the process involves searching through a hypothesis space to find the hypothesis that best fits the training examples. Let's use the house-buying scenario to illustrate this concept.

Concept Learning in the House-Buying Scenario

In the house-buying scenario, Jordan decides whether to buy a house based on certain features:
1. Location: Urban (A), Suburban (B), Rural (C)
2. Budget: Within budget (Yes), Not within budget (No)
3. Size: Small, Medium, Large
4. Garden: Has garden (Yes), No garden (No)
5. Schools Nearby: Good schools nearby (Yes), No good schools nearby (No)
6. Public Transport: Convenient public transport (Yes), Inconvenient public transport (No)

Instance Space (X)

The instance space X consists of all possible combinations of these features. With the values given for each feature, the total number of distinct instances is 3 * 2 * 3 * 2 * 2 * 2 = 144 (3 options for Location, 2 for Budget, 3 for Size, 2 for Garden, 2 for Schools Nearby, 2 for Public Transport).
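As a quick check on that count, here is a minimal sketch that computes the size of the instance space from the feature value lists above; the dictionary name and value strings are just illustrative choices, not part of the lesson.

```python
from math import prod

# Possible values for each feature in the house-buying scenario,
# mirroring the list above (names and strings are illustrative).
feature_values = {
    "Location": ["Urban", "Suburban", "Rural"],
    "Budget": ["Yes", "No"],
    "Size": ["Small", "Medium", "Large"],
    "Garden": ["Yes", "No"],
    "Schools Nearby": ["Yes", "No"],
    "Public Transport": ["Yes", "No"],
}

# Size of the instance space X: the product of the number of values per feature.
instance_space_size = prod(len(values) for values in feature_values.values())
print(instance_space_size)  # 3 * 2 * 3 * 2 * 2 * 2 = 144
```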

Lesson 5.2. Inductive Learning Hypothesis

The Inductive Learning Hypothesis is a fundamental principle in machine learning and artificial intelligence. It underpins many supervised learning algorithms and models.

Core Idea

The Inductive Learning Hypothesis posits that if a hypothesis (or model) performs well on a sufficiently large and representative set of training examples, it will also likely perform well on unseen examples that it wasn't trained on. This hypothesis rests on the assumption that the patterns or rules learned from the training data will generalize to new data drawn from the same underlying distribution.

Key Components
1. Approximating the Target Function: In machine learning, the target function is the actual relationship between the input variables and the output variable in the dataset. This function is unknown; it is what the learning algorithm aims to approximate.
2. Training Examples: These are the instances of data that the model is trained on. Each consists of the input features and the corresponding output (label).
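The idea can be made concrete with a small, self-contained sketch (not from the lesson): an unknown target concept labels the data, a crude hypothesis is fit on a training split, and its accuracy is then checked on examples it never saw. All names and the toy concept here are illustrative assumptions.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# Unknown target concept c(x): the learner never sees this definition,
# only labelled examples of it. (Purely illustrative.)
def target_concept(budget_ok, schools_ok):
    return budget_ok and schools_ok

def draw_example():
    x = (random.random() < 0.5, random.random() < 0.5)
    return x, target_concept(*x)

examples = [draw_example() for _ in range(300)]
train, unseen = examples[:200], examples[200:]

# A crude learned hypothesis h: predict the majority training label for each
# input seen during training, fall back to the overall majority otherwise.
votes = defaultdict(Counter)
for x, y in train:
    votes[x][y] += 1
overall = Counter(y for _, y in train)

def h(x):
    counts = votes.get(x)
    return (counts or overall).most_common(1)[0][0]

train_acc = sum(h(x) == y for x, y in train) / len(train)
unseen_acc = sum(h(x) == y for x, y in unseen) / len(unseen)
print(f"train accuracy: {train_acc:.2f}, unseen accuracy: {unseen_acc:.2f}")
```

Because the unseen examples come from the same distribution as the training set, a hypothesis that fits the training data well also scores well on them, which is exactly what the Inductive Learning Hypothesis asserts.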

Lesson 5.1. Concept Learning

What is Concept Learning?

Input: a training dataset of positive and negative samples, e.g. images of cat and not-cat.
Output: the identified category, e.g. a binary classification of cat or not-cat.
Representation of output: Boolean, {1: cat, 0: otherwise}.

From specific examples of cat and not-cat images, the learner forms the general concept of cat versus not-cat.

Dataset

Consider a dataset for a real-world scenario where we predict whether a person, let's call them "Jordan", will buy a house based on various factors (features).

Scenario: Will Jordan Buy a House?

Features Explanation:
Feature 1 (Location): The type of location (A - Urban, B - Suburban, C - Rural).
Feature 2 (Budget): Is the house within Jordan's budget? (Yes, No).
Feature 3 (Size): What is the size of the house? (Small, Medium, Large).
Feature 4 (Garden): Does the house have a garden? (Yes, No).
Feature 5 (Schools Nearby): Are there good schools nearby? (Yes, No).
Feature 6 (Public Transport): Is there convenient public transport? (Yes, No).
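One way to picture such a dataset is as a list of (instance, label) pairs, where each instance is a tuple of the six feature values and the label is the Boolean concept "Jordan buys the house". The rows and labels in this sketch are made up for illustration; they are not taken from the lesson.

```python
# One training example = (instance, label). The instance lists the six features
# in the order Location, Budget, Size, Garden, Schools Nearby, Public Transport;
# the label is 1 for "buys" and 0 otherwise. Rows are illustrative only.
training_data = [
    (("Urban",    "Yes", "Medium", "Yes", "Yes", "Yes"), 1),  # bought
    (("Rural",    "No",  "Large",  "Yes", "No",  "No"),  0),  # not bought
    (("Suburban", "Yes", "Small",  "No",  "Yes", "Yes"), 1),  # bought
]

for instance, label in training_data:
    print(instance, "->", "buy" if label else "don't buy")
```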

Lesson 4.5. The Final Design - Designing A Learning System

Central components of a Learning System

1. **Performance System**:
Input: a new problem (an initial board state)
Output: a trace of the solution (the game history)
- Plays against itself to get the sequence of moves.
- Uses the learned evaluation function, denoted V', to select its next move.
- As the evaluation function becomes more accurate, the quality of the AI's move selection should improve.

2. **Critic**:
Input: the trace of the game
Output: training examples (b, Vtrain(b)), i.e. it estimates Vtrain(b) = V'(Successor(b))

3. **Generalizer**:
Input: the training examples (dataset)
Output: an estimate of the weights, i.e. the machine learning model
- Takes the training examples provided by the Critic and creates a generalized hypothesis for the target function.
- Applies an algorithm, like the LMS (Least Mean Squares) algorithm, to generalize from specific examples to a broader understanding that can be applied to unseen game states.

4. **Experiment Generator**:
Input: the linear function produced by the Generalizer
Output: a new board state (the next problem for the Performance System to explore)
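To make the data flow between the four components explicit, here is a heavily stubbed sketch of the training loop. Every function body is a placeholder assumption (a real system would play full games, compute V'(Successor(b)), and run LMS); only the wiring New Problem → Trace → (b, Vtrain(b)) pairs → updated weights → new problem mirrors the design above.

```python
# Stubbed sketch of the learning-system loop; all bodies are placeholders.

def performance_system(initial_board, weights):
    """Plays a game against itself using V' and returns the trace of boards."""
    return [initial_board]  # placeholder: a real system generates the full game

def critic(trace, weights):
    """Turns a game trace into training examples (b, Vtrain(b)),
    where Vtrain(b) = V'(Successor(b))."""
    return [(board, 0.0) for board in trace]  # placeholder training values

def generalizer(training_examples, weights):
    """Fits the weights of V' to the training examples (e.g. with LMS)."""
    return weights  # placeholder: no actual weight update shown here

def experiment_generator(weights):
    """Proposes a new problem (initial board state) to explore next."""
    return "initial board"  # placeholder

weights = [0.0] * 7
problem = experiment_generator(weights)
for _ in range(3):  # a few training iterations
    trace = performance_system(problem, weights)
    examples = critic(trace, weights)
    weights = generalizer(examples, weights)
    problem = experiment_generator(weights)
```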

Lesson 4.4. Choosing a Function Approximation Algorithm - Designing A Learning System

The Premise

The checkers problem (Problem X) is converted to another problem (Problem Y), and we state that solving Y would help us solve X. Here Problem Y is a linear equation in the x's and w's, and the machine (learner) needs to learn the unknown w's.

From Lesson 4.3, our choice of board features:
• x1(b) — number of black pieces on board b
• x2(b) — number of white pieces on b
• x3(b) — number of black kings on b
• x4(b) — number of white kings on b
• x5(b) — number of white pieces threatened by black (i.e. that can be captured on black's next move)
• x6(b) — number of black pieces threatened by white

Linear Combination

V'(b) = w0 + w1·x1(b) + w2·x2(b) + w3·x3(b) + w4·x4(b) + w5·x5(b) + w6·x6(b)

V' is the learner's current approximation of V. It represents the AI's method for evaluating a board state: its strategy, or set of rules, for determining how good a position is.

Structure of training examples: each example is a pair (b, Vtrain(b)), a board state b together with its training value (provided by the Critic, as in Lesson 4.5).
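Here is a minimal sketch of the linear evaluation function V' and one LMS weight update, w_i ← w_i + η·(Vtrain(b) − V'(b))·x_i with x0 = 1. The function names, the learning rate value, and the example feature numbers are illustrative assumptions; only the formula follows the lesson.

```python
# Linear V' and a single LMS step; feature values below are made up.

LEARNING_RATE = 0.1  # the constant eta in the LMS update (illustrative value)

def v_hat(weights, features):
    """V'(b) = w0 + w1*x1(b) + ... + w6*x6(b)."""
    w0, *ws = weights
    return w0 + sum(w * x for w, x in zip(ws, features))

def lms_update(weights, features, v_train):
    """One LMS step: w_i <- w_i + eta * (Vtrain(b) - V'(b)) * x_i, with x0 = 1."""
    error = v_train - v_hat(weights, features)
    xs = [1.0, *features]  # x0 = 1 pairs with the bias weight w0
    return [w + LEARNING_RATE * error * x for w, x in zip(weights, xs)]

weights = [0.0] * 7                      # w0..w6, initially zero
example = ([12, 12, 0, 0, 1, 0], 25.0)   # (x1..x6, Vtrain(b)) - illustrative numbers
weights = lms_update(weights, *example)
print(weights)
```

Repeating this update over many (b, Vtrain(b)) pairs gradually moves the weights toward values that make V'(b) track the training values.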

Lesson 4.3. Choosing a Representation for the Target Function - Designing A Learning System

In the previous section, we decided to represent the target function for the checkers problem as a value for the board state. We saw that it is computationally expensive to derive this value exactly, so we approximate the value of the board state.

Choosing the right representation for the target function V(b) in the checkers problem is crucial for its practical implementation. Here are some key factors to consider:

1. Expressiveness: The representation should be able to capture the important features of the game state that influence its value (e.g., piece count, position, potential captures, king status). A more expressive representation allows for better approximations of V(b) but might be computationally expensive.
2. Simplicity: A simpler representation is easier to learn and computationally efficient. However, it might not capture all the relevant information, leading to less accurate approximations of V(b).
3. Training data and learning algorithm: The representation should also be compatible with the available training data and the learning algorithm used to fit it.
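To make the expressiveness-versus-simplicity trade-off concrete, here is a minimal sketch of one cheap representation: summarizing a board by a few piece counts instead of the full square-by-square layout. The board encoding ('b' black piece, 'B' black king, 'w' white piece, 'W' white king, '.' empty) and the choice of counts are illustrative assumptions, not part of the lesson.

```python
from collections import Counter

def summarize(board):
    """Reduce a board (sequence of square codes) to a small set of counts."""
    counts = Counter(board)
    return {
        "black_pieces": counts["b"],
        "white_pieces": counts["w"],
        "black_kings": counts["B"],
        "white_kings": counts["W"],
        # A more expressive representation could also count threatened pieces
        # or encode positions, at extra computational cost.
    }

board = list("bbbbbbbbbbbb........wwwwwwwwwwww")  # 32 playable squares, opening position
print(summarize(board))
```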