[Book Giveaway, Round 2] Python Machine Learning: Based on PyTorch and Scikit-Learn

Foreword

In recent years, machine learning methods have been widely adopted in industries such as healthcare, robotics, biology, physics, consumer products, and Internet services, thanks to their ability to make sense of massive amounts of data and support autonomous decision making. Since AlexNet won the ImageNet competition in 2012, machine learning and deep learning have developed rapidly, achieving milestone after milestone and profoundly affecting industry, academia, and people's daily lives.

Today, machine learning, deep learning, and artificial intelligence are among the most popular research directions in the information field, and jobs in these areas are highly attractive. Giant leaps in science often come from brilliant ideas paired with easy-to-use tools, and machine learning is no exception.

Applying machine learning in practice requires combining theory with tools. For newcomers to machine learning, it is hard enough to grasp the principles and concepts, let alone decide which software packages to install. Many people trying machine learning for the first time find it genuinely difficult to understand what an algorithm is doing: not only because of the intricate mathematical theory and unfamiliar notation, but because studying an algorithm through definitions and derivations alone, without practical examples, is tedious. Even the tutorials available online tend to offer little more than formulas and opaque explanations, and few of them spell out all the details.

Therefore, "Python Machine Learning: Based on PyTorch and Scikit-Learn" positions itself to combine machine learning theory with engineering practice, lowering the barrier to entry for readers. From the fundamentals of data-driven methods to the latest deep learning frameworks, every chapter provides code examples for solving practical machine learning problems.


** "Python Machine Learning: Based on PyTorch and Scikit-Learn"
**
[US] Sebastian Raschka, [US] Liu Yuxi (Hayden),

[US] Wahid Mirjalili

Dmytro Dzhulgakov , one of the "Four Great Masterpieces" of Python Deep Learning, the new PyTorch version
of PyTorch core maintainer

Personally written preface recommended

This book introduces not only the fundamental principles of machine learning but also the engineering practice of the field. Its value is immeasurable, and I hope it will inspire readers to apply machine learning to their own areas of research.

Brief Introduction

This book is a comprehensive guide to machine learning and deep learning with PyTorch. It serves both as a step-by-step tutorial for beginners and as a reference for readers developing their own machine learning projects.

With clear explanations and vivid examples, this book offers an in-depth introduction to the fundamentals of machine learning methods. It provides not only instructions for building machine learning models but also practical guidelines for solving real-world problems with them. This edition adds PyTorch-based deep learning content and covers the new version of Scikit-Learn. It presents a variety of machine learning and deep learning methods for text and image classification, introduces generative adversarial networks (GANs) for generating new data and reinforcement learning for training agents, and closes with recent developments in deep learning, including graph neural networks and large transformers for natural language processing (NLP). Whether you are new to machine learning or want to keep up with its progress, this book is an excellent choice for doing machine learning with Python.

After finishing this book, you will be able to:

- Explore the frameworks, models, and methods that let machines "learn" from data.
- Implement machine learning with Scikit-Learn and deep learning with PyTorch.
- Train machine learning classifiers on images, text, and other data.
- Build and train neural networks, transformers, and graph neural networks.
- Discover best practices for evaluating and optimizing models.
- Predict continuous target outcomes with regression analysis.
- Dig into text and social media data with sentiment analysis.
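To give a flavor of the hands-on style described above, here is a minimal sketch (written for this post, not excerpted from the book) that trains a Scikit-Learn classifier on the iris dataset and then fits the same task with a small PyTorch network; it assumes `scikit-learn` and `torch` are installed:

```python
# Minimal sketch of the two workflows the book combines (not book code).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Load and split the iris dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Scikit-Learn: fit a logistic regression classifier.
clf = LogisticRegression(max_iter=200).fit(X_train, y_train)
print(f"Scikit-Learn test accuracy: {clf.score(X_test, y_test):.3f}")

# PyTorch: the same task with a small multilayer perceptron.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

X_t = torch.tensor(X_train, dtype=torch.float32)
y_t = torch.tensor(y_train, dtype=torch.long)
for epoch in range(100):  # a short full-batch training loop
    optimizer.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    logits = model(torch.tensor(X_test, dtype=torch.float32))
    acc = (logits.argmax(dim=1) == torch.tensor(y_test)).float().mean()
print(f"PyTorch test accuracy: {acc.item():.3f}")
```

The book builds both of these workflows up from first principles rather than treating the libraries as black boxes.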

About the Authors

Sebastian Raschka received his PhD from Michigan State University and is now an assistant professor of statistics at the University of Wisconsin-Madison, conducting research in machine learning and deep learning. His research interests include few-shot learning under data constraints and building deep neural networks for predicting ordinal target values. He is also an open-source contributor and, as Grid.ai's Lead AI Educator, is passionate about spreading knowledge of machine learning and AI.

Yuxi (Hayden) Liu is a machine learning software engineer at Google and previously worked as a machine learning scientist. He is the author of a series of machine learning books. His first book, Python Machine Learning By Example, was ranked #1 in its category on Amazon in 2017 and 2018 and has been translated into several languages.

Vahid Mirjalili received a dual PhD in mechanical engineering and computer science from Michigan State University. He is a researcher focusing on computer vision and deep learning.

Sebastian Raschka excels at explaining complex methods and concepts in an accessible way. As the deep learning revolution spread into field after field, he and his co-authors kept upgrading and improving the book, publishing the second and third editions in succession. Building on those three editions, this book adds new chapters, including PyTorch-based content covering transformers and graph neural networks, the current state-of-the-art methods in deep learning, which have swept through fields such as text understanding and molecular structure analysis over the past two years. The authors' expertise and their experience solving real-world problems give the book an excellent balance of theoretical knowledge and hands-on practice.

Sebastian Raschka and Vahid Mirjalili have extensive research experience in computer vision and computational biology. Yuxi Liu is adept at solving practical machine learning problems, such as applying machine learning methods to event prediction and recommendation systems. All three authors are passionate about education and have written this book in accessible language to meet readers' needs.

Personally recommended by the PyTorch core maintainer: "I believe you will find this book a comprehensive and thorough summary of the hot topics in machine learning, with clear and valuable explanations of how to implement the methods. I hope you can draw inspiration from it and use machine learning to solve practical problems." — Dmytro Dzhulgakov, PyTorch core maintainer

Table of Contents


Translator's Preface
Preface
Foreword
About the Authors
About the Reviewers

Chapter 1 Empowering Computers to Learn from Data
1.1 Intelligent Systems That Transform Data into Knowledge
1.2 The Three Types of Machine Learning
1.2.1 Supervised Learning for Predicting the Future
1.2.2 Reinforcement Learning for Solving Interactive Problems
1.2.3 Unsupervised Learning for Discovering Hidden Patterns in Data
1.3 Basic Terminology and Notation
1.3.1 Notation and Conventions Used in This Book
1.3.2 Machine Learning Terminology
1.4 A Roadmap for Building Machine Learning Systems
1.4.1 Data Preprocessing: Making Data Usable
1.4.2 Training and Selecting a Predictive Model
1.4.3 Evaluating Models on Unseen Data
1.5 Implementing Machine Learning Algorithms in Python
1.5.1 Installing Python and Packages from the Python Package Index
1.5.2 Using the Anaconda Python Package Manager
1.5.3 Packages for Scientific Computing, Data Science, and Machine Learning
1.6 Chapter Summary

Chapter 2 Training Simple Machine Learning Classification Algorithms
2.1 Artificial Neurons: A Glimpse into the Early History of Machine Learning
2.1.1 The Definition of an Artificial Neuron
2.1.2 The Perceptron Learning Rule
2.2 Implementing the Perceptron Learning Algorithm in Python
2.2.1 An Object-Oriented Perceptron API
2.2.2 Training the Perceptron on the Iris Dataset
2.3 Adaptive Linear Neurons and the Convergence of Learning
2.3.1 Minimizing the Loss Function with Gradient Descent
2.3.2 Implementing Adaline in Python
2.3.3 Improving Gradient Descent through Feature Scaling
2.3.4 Large-Scale Machine Learning and Stochastic Gradient Descent
2.4 Chapter Summary


Chapter 3 A Tour of Machine Learning Classification Algorithms with Scikit-Learn
3.1 Choosing a Classification Algorithm
3.2 First Steps with Scikit-Learn: Training a Perceptron
3.3 Modeling Class Probabilities with Logistic Regression
3.3.1 Logistic Regression and Conditional Probabilities
3.3.2 Updating Model Weights with the Logistic Loss Function
3.3.3 From the Adaline Implementation to an Implementation of Logistic Regression
3.3.4 Training a Logistic Regression Model with Scikit-Learn
3.3.5 Avoiding Model Overfitting with Regularization
3.4 Maximum-Margin Classification with Support Vector Machines
3.4.1 Understanding the Maximum Margin
3.4.2 Handling Nonlinearly Separable Cases with Slack Variables
3.4.3 Alternative Implementations in Scikit-Learn
3.5 Solving Nonlinear Problems with Kernel Support Vector Machines
3.5.1 Kernel Methods for Linearly Inseparable Data
3.5.2 Using Kernel Methods to Find Separating Hyperplanes in High-Dimensional Space
3.6 Decision Tree Learning
3.6.1 Maximizing Information Gain
3.6.2 Building a Decision Tree
3.6.3 Combining Multiple Decision Trees into a Random Forest
3.7 K-Nearest Neighbors: A Lazy Learning Algorithm
3.8 Chapter Summary

Chapter 4 Building Good Training Datasets: Data Preprocessing
4.1 Dealing with Missing Values
4.1.1 Identifying Missing Values in Tabular Data
4.1.2 Removing Samples or Features with Missing Values
4.1.3 Imputing Missing Values
4.1.4 Estimators in Scikit-Learn
4.2 Handling Categorical Data
4.2.1 Encoding Categorical Data with pandas
4.2.2 Mapping Ordinal Features
4.2.3 Encoding Class Labels
4.2.4 One-Hot Encoding of Nominal Features
4.3 Partitioning a Dataset into Training and Test Datasets
4.4 Bringing Features onto the Same Scale
4.5 Selecting Meaningful Features
4.5.1 Penalizing Model Complexity with L1 and L2 Regularization
4.5.2 A Geometric Interpretation of L2 Regularization
4.5.3 Sparse Solutions with L1 Regularization
4.5.4 Sequential Feature Selection Algorithms
4.6 Assessing Feature Importance with Random Forests
4.7 Chapter Summary

Chapter 5 Compressing Data via Dimensionality Reduction
5.1 Unsupervised Dimensionality Reduction via Principal Component Analysis
5.1.1 The Main Steps of Principal Component Analysis
5.1.2 Extracting the Principal Components Step by Step
5.1.3 Total Variance and Explained Variance
5.1.4 Feature Transformation
5.1.5 Principal Component Analysis with Scikit-Learn
5.1.6 Assessing Feature Contributions
5.2 Supervised Data Compression via Linear Discriminant Analysis
5.2.1 Principal Component Analysis versus Linear Discriminant Analysis
5.2.2 The Fundamentals of Linear Discriminant Analysis
5.2.3 Computing the Scatter Matrices
5.2.4 Selecting Linear Discriminants for the New Feature Subspace
5.2.5 Projecting Samples onto the New Feature Space
5.2.6 Linear Discriminant Analysis with Scikit-Learn
5.3 Nonlinear Dimensionality Reduction and Visualization
5.3.1 The Limitations of Nonlinear Dimensionality Reduction
5.3.2 Visualizing Data with t-SNE
5.4 Chapter Summary


Chapter 6 Best Practices for Model Evaluation and Hyperparameter Tuning
6.1 Streamlining Workflows with Pipelines
6.1.1 Loading the Wisconsin Breast Cancer Dataset
6.1.2 Combining Transformers and Estimators in a Pipeline
6.2 Evaluating Model Performance with k-Fold Cross-Validation
6.2.1 Holdout Cross-Validation
6.2.2 k-Fold Cross-Validation
6.3 Debugging Algorithms with Learning and Validation Curves
6.3.1 Diagnosing Bias and Variance Problems with Learning Curves
6.3.2 Addressing Overfitting and Underfitting with Validation Curves
6.4 Fine-Tuning Machine Learning Models via Grid Search
6.4.1 Tuning Hyperparameters via Grid Search
6.4.2 Exploring Hyperparameter Configurations More Widely with Randomized Search
6.4.3 Hyperparameter Search with Successive Halving
6.4.4 Nested Cross-Validation
6.5 Metrics for Evaluating Model Performance
6.5.1 The Confusion Matrix
6.5.2 Precision and Recall
6.5.3 Plotting the ROC Curve
6.5.4 Evaluation Metrics for Multiclass Classification
6.5.5 Dealing with Class Imbalance
6.6 Chapter Summary


Chapter 7 Combining Different Models for Ensemble Learning
7.1 Learning with Ensembles
7.2 Combining Classifiers via Majority Voting
7.2.1 Implementing a Simple Majority-Voting Ensemble Classifier
7.2.2 Making Predictions Based on the Majority-Voting Principle
7.2.3 Evaluating and Tuning the Ensemble Classifier
7.3 Bagging: Building Ensemble Classifiers from Bootstrap Samples
7.3.1 Bagging in a Nutshell
7.3.2 Applying Bagging to Classify Samples in the Wine Dataset
7.4 Improving Weak Learners via Adaptive Boosting
7.4.1 How Boosting Works
7.4.2 Implementing AdaBoost with Scikit-Learn
7.5 Gradient Boosting: Training an Ensemble Based on Loss Gradients
7.5.1 Comparing AdaBoost with Gradient Boosting
7.5.2 A General Outline of the Gradient Boosting Algorithm
7.5.3 Explaining the Gradient Boosting Algorithm for Classification
7.5.4 An Example of Classification with Gradient Boosting
7.5.5 Using XGBoost
7.6 Chapter Summary

Chapter 8 Applying Machine Learning to Sentiment Analysis
8.1 Preparing the IMDb Movie Review Data for Text Processing
8.1.1 Obtaining the Movie Review Dataset
8.1.2 Preprocessing the Movie Review Dataset into a More Usable Format
8.2 The Bag-of-Words Model
8.2.1 Transforming Words into Feature Vectors
8.2.2 Assessing Word Relevance via Term Frequency-Inverse Document Frequency
8.2.3 Cleaning Text Data
8.2.4 Processing Documents into Tokens
8.3 Training a Logistic Regression Model for Document Classification
8.4 Working with Bigger Data: Online Algorithms and Out-of-Core Learning
8.5 Topic Modeling with Latent Dirichlet Allocation
8.5.1 Decomposing Text Documents with LDA
8.5.2 Implementing LDA with Scikit-Learn
8.6 Chapter Summary

Chapter 9 Predicting Continuous Target Variables with Regression Analysis
9.1 Introducing Linear Regression
9.1.1 Simple Linear Regression
9.1.2 Multiple Linear Regression
9.2 Exploring the Ames Housing Dataset
9.2.1 Loading the Ames Housing Dataset into a DataFrame
9.2.2 Visualizing the Important Characteristics of a Dataset
9.2.3 Examining Relationships with a Correlation Matrix
9.3 Implementing an Ordinary Least Squares Linear Regression Model
9.3.1 Solving for the Regression Parameters with Gradient Descent
9.3.2 Estimating the Coefficients of a Regression Model with Scikit-Learn
9.4 Fitting a Robust Regression Model Using RANSAC
9.5 Evaluating the Performance of Linear Regression Models
9.6 Using Regularized Methods for Regression
9.7 Turning a Linear Regression Model into a Curve: Polynomial Regression
9.7.1 Adding Polynomial Terms with Scikit-Learn
9.7.2 Modeling Nonlinear Relationships in the Ames Housing Dataset
9.8 Dealing with Nonlinear Relationships Using Random Forests
9.8.1 Decision Tree Regression
9.8.2 Random Forest Regression
9.9 Chapter Summary


Chapter 10 Working with Unlabeled Data: Cluster Analysis
10.1 Grouping Samples with the k-Means Algorithm
10.1.1 k-Means Clustering with Scikit-Learn
10.1.2 k-Means++: A Smarter Way to Initialize Clusters
10.1.3 Hard versus Soft Clustering
10.1.4 Finding the Optimal Number of Clusters with the Elbow Method
10.1.5 Quantifying the Quality of Clustering via Silhouette Plots
10.2 Organizing Clusters as a Hierarchical Tree
10.2.1 Bottom-Up Clustering
10.2.2 Performing Hierarchical Clustering on a Distance Matrix
10.2.3 Attaching Dendrograms to a Heat Map
10.2.4 Agglomerative Clustering with Scikit-Learn
10.3 Locating Regions of High Density via DBSCAN
10.4 Chapter Summary


Chapter 11 Implementing a Multilayer Artificial Neural Network from Scratch
11.1 Modeling Complex Functions with Artificial Neural Networks
11.1.1 Single-Layer Neural Networks
11.1.2 Multilayer Neural Network Architecture
11.1.3 Activating a Neural Network via Forward Propagation
11.2 Classifying Handwritten Digits
11.2.1 Obtaining and Preparing the MNIST Dataset
11.2.2 Implementing a Multilayer Perceptron
11.2.3 Coding the Neural Network Training Loop
11.2.4 Evaluating the Performance of the Neural Network
11.3 Training an Artificial Neural Network
11.3.1 Computing the Loss Function
11.3.2 Understanding Backpropagation
11.3.3 Training Neural Networks via Backpropagation
11.4 On the Convergence of Neural Networks
11.5 A Few Final Words about Neural Network Implementations
11.6 Chapter Summary

Chapter 12 Parallel Training of Neural Networks with PyTorch
12.1 PyTorch and Model Training Performance
12.1.1 Performance Challenges
12.1.2 What Is PyTorch?
12.1.3 How to Learn PyTorch
12.2 First Steps with PyTorch
12.2.1 Installing PyTorch
12.2.2 Creating Tensors in PyTorch
12.2.3 Manipulating the Shape and Data Type of Tensors
12.2.4 Applying Mathematical Operations to Tensors
12.2.5 Splitting, Stacking, and Concatenating Tensors
12.3 Building Input Pipelines in PyTorch
12.3.1 Creating a PyTorch DataLoader from Existing Tensors
12.3.2 Combining Two Tensors into a Joint Dataset
12.3.3 Shuffling, Batching, and Repeating
12.3.4 Creating a Dataset from Files on Local Disk
12.3.5 Fetching Datasets from the torchvision.datasets Library
12.4 Building Neural Network Models in PyTorch
12.4.1 The PyTorch Neural Network Module
12.4.2 Building a Linear Regression Model
12.4.3 Training a Model with the torch.nn and torch.optim Modules
12.4.4 Building a Multilayer Perceptron to Classify the Iris Dataset
12.4.5 Evaluating the Trained Model on the Test Dataset
12.4.6 Saving and Reloading a Trained Model
12.5 Choosing Activation Functions for Multilayer Neural Networks
12.5.1 Recap of the Logistic Function
12.5.2 Estimating Class Probabilities in Multiclass Classification with the Softmax Function
12.5.3 Broadening the Output Spectrum with the Hyperbolic Tangent Function
12.5.4 The Rectified Linear Unit
12.6 Chapter Summary

Chapter 13 Going Deeper: The Mechanics of PyTorch
13.1 The Key Features of PyTorch
13.2 Computation Graphs in PyTorch
13.2.1 Understanding Computation Graphs
13.2.2 Creating a Computation Graph in PyTorch
13.3 PyTorch Tensors for Storing and Updating Model Parameters
13.4 Computing Gradients via Automatic Differentiation
13.4.1 Computing the Gradients of the Loss with Respect to Trainable Variables
13.4.2 Understanding Automatic Differentiation
13.4.3 Adversarial Examples
13.5 Simplifying Implementations of Common Architectures with the torch.nn Module
13.5.1 Implementing Models with nn.Sequential
13.5.2 Choosing a Loss Function
13.5.3 Solving the XOR Classification Problem
13.5.4 Building Models Flexibly with nn.Module
13.5.5 Writing Custom Layers in PyTorch
13.6 Project One: Predicting the Fuel Efficiency of a Car
13.6.1 Working with Feature Columns
13.6.2 Training a DNN Regression Model
13.7 Project Two: Classifying MNIST Handwritten Digits
13.8 Higher-Level PyTorch APIs: A Short Introduction to PyTorch Lightning
13.8.1 Setting Up a PyTorch Lightning Model
13.8.2 Setting Up the Data Loaders for Lightning
13.8.3 Training the Model with the PyTorch Lightning Trainer Class
13.8.4 Evaluating the Model with TensorBoard
13.9 Chapter Summary

Chapter 14 Classifying Images with Deep Convolutional Neural Networks
14.1 The Building Blocks of Convolutional Neural Networks
14.1.1 Understanding Convolutional Neural Networks and Feature Hierarchies
14.1.2 Performing Discrete Convolutions
14.1.3 Subsampling Layers
14.2 Building Convolutional Neural Networks
14.2.1 Working with Multiple Input Channels
14.2.2 Regularizing a Neural Network with the L2 Norm and Dropout
14.2.3 Loss Functions for Classification Tasks
14.3 Implementing a Deep Convolutional Neural Network with PyTorch
14.3.1 The Architecture of a Multilayer Convolutional Neural Network
14.3.2 Data Loading and Preprocessing
14.3.3 Implementing a Convolutional Neural Network with the torch.nn Module
14.4 Smile Classification on Face Images with a Convolutional Neural Network
14.4.1 Loading the CelebA Dataset
14.4.2 Image Transformation and Data Augmentation
14.4.3 Training a Convolutional Neural Network Smile Classifier
14.5 Chapter Summary

Chapter 15 Modeling Sequential Data with Recurrent Neural Networks
15.1 Sequential Data
15.1.1 Modeling Sequential Data
15.1.2 Sequential Data versus Time Series Data
15.1.3 Representing Sequential Data
15.1.4 Approaches to Sequence Modeling
15.2 Recurrent Neural Networks for Modeling Sequences
15.2.1 The Looping Mechanism in Recurrent Neural Networks
15.2.2 Computing Activations in a Recurrent Neural Network
15.2.3 Hidden-Layer Recurrence versus Output-Layer Recurrence
15.2.4 The Challenges of Learning Long-Range Dependencies
15.2.5 Long Short-Term Memory Networks
15.3 Implementing Recurrent Neural Networks in PyTorch
15.3.1 Project One: Sentiment Analysis of IMDb Movie Reviews
15.3.2 Project Two: Character-Level Language Modeling in PyTorch
15.4 Chapter Summary

Chapter 16 Transformers: Improving Natural Language Processing with Attention Mechanisms
16.1 Adding an Attention Mechanism to Recurrent Neural Networks
16.1.1 Attention Mechanisms That Help RNNs Access Information
16.1.2 The Original Attention Mechanism for RNNs
16.1.3 Processing Input Data with a Bidirectional Recurrent Neural Network
16.1.4 Generating Outputs from Context Vectors
16.1.5 Computing Attention Weights
16.2 The Self-Attention Mechanism
16.2.1 A Basic Form of Self-Attention
16.2.2 Parameterizing Self-Attention: Scaled Dot-Product Attention
16.3 Attention Is All We Need: The Original Transformer
16.3.1 Encoding Context Embeddings via Multi-Head Attention
16.3.2 Learning a Language Model: Decoder and Masked Multi-Head Attention
16.3.3 Implementation Details: Positional Encodings and Layer Normalization
16.4 Building Large-Scale Language Models from Unlabeled Data
16.4.1 Pretraining and Fine-Tuning Transformer Models
16.4.2 GPT Models That Leverage Unlabeled Data
16.4.3 Generating New Text with GPT-2
16.4.4 The Bidirectionally Pretrained BERT Model
16.4.5 The Best of Both Worlds: BART
16.5 Fine-Tuning a BERT Model in PyTorch
16.5.1 Loading the IMDb Movie Review Dataset
16.5.2 Tokenizing the Dataset
16.5.3 Loading and Fine-Tuning a Pretrained BERT Model
16.5.4 Fine-Tuning the Transformer with the Trainer API
16.6 Chapter Summary


Chapter 17 Generative Adversarial Networks for Synthesizing New Data
17.1 Introducing Generative Adversarial Networks
17.1.1 Autoencoders
17.1.2 Generative Models for Synthesizing New Data
17.1.3 Generating New Samples with Generative Adversarial Networks
17.1.4 Understanding the Loss Functions of the Generator and Discriminator Networks in a GAN Model
17.2 Implementing a Generative Adversarial Network from Scratch
17.2.1 Training GAN Models on Google Colab
17.2.2 Implementing the Generator and Discriminator Networks
17.2.3 Defining the Training Dataset
17.2.4 Training the GAN Model
17.3 Improving the Quality of Synthesized Images with Convolutional GANs and Wasserstein GANs
17.3.1 Transposed Convolution
17.3.2 Batch Normalization
17.3.3 Implementing the Generator and Discriminator
17.3.4 Dissimilarity Measures between Two Distributions
17.3.5 Using the EM Distance in Practice for GANs
17.3.6 The Gradient Penalty
17.3.7 Implementing a Deep Convolutional GAN with WGAN-GP
17.3.8 Mode Collapse
17.4 Other GAN Applications
17.5 Chapter Summary


Chapter 18 Graph Neural Networks for Capturing Dependencies in Graph-Structured Data
18.1 Introduction to Graph Data
18.1.1 Undirected Graphs
18.1.2 Directed Graphs
18.1.3 Labeled Graphs
18.1.4 Representing Molecular Structures as Graphs
18.2 Understanding Graph Convolutions
18.2.1 The Fundamentals of Graph Convolutions
18.2.2 Implementing a Basic Graph Convolution Function
18.3 Implementing a Graph Neural Network from Scratch with PyTorch
18.3.1 Defining the NodeNetwork Model
18.3.2 Coding the NodeNetwork's Graph Convolution Layer
18.3.3 Adding a Global Pooling Layer to Handle Graphs of Varying Size
18.3.4 Preparing the Data Loaders
18.3.5 Making Predictions with the NodeNetwork Model
18.3.6 Implementing a Graph Neural Network with the PyTorch Geometric Library
18.4 Other Graph Neural Network Layers and Recent Developments
18.4.1 Spectral Graph Convolutions
18.4.2 Pooling
18.4.3 Data Normalization
18.4.4 Pointers to the Graph Neural Network Literature
18.5 Chapter Summary

Chapter 19 Reinforcement Learning for Decision Making in Complex Environments
19.1 An Overview of Learning from Experience
19.1.1 Understanding Reinforcement Learning
19.1.2 Agents and Environments
19.2 The Theoretical Foundations of Reinforcement Learning
19.2.1 Markov Decision Processes
19.2.2 Episodic versus Continuing Tasks
19.2.3 Reinforcement Learning Terminology
19.2.4 Dynamic Programming with the Bellman Equation
19.3 Reinforcement Learning Algorithms
19.3.1 Dynamic Programming
19.3.2 Monte Carlo Reinforcement Learning
19.3.3 Temporal Difference Learning
19.4 Implementing Our First Reinforcement Learning Algorithm
19.4.1 Introducing the OpenAI Gym Toolkit
19.4.2 Solving the Grid World Problem with Q-Learning
19.5 A Glance at Deep Q-Learning
19.5.1 Training a Deep Q-Network Model
19.5.2 Implementing the Deep Q-Learning Algorithm
19.6 Chapter and Book Summary

Round 2 of the book giveaway: just leave the comment "Life is short, I learn python" to enter. The winner list will be announced in the blog updates, and winners will also be notified by private message.


Source: blog.csdn.net/2202_75623950/article/details/131885972