Chapter 7 Exercises
3. Let k be the number of distinct variables per clause (in this problem, k = 3).
The key insight is that we can create at most C(n,k) different clauses, where
C(n,k) is the binomial coefficient "n choose k" - the number of ways to
choose k items from a set of n items.
C(n,k) = n! / (k! × (n-k)!)
- We need C(n,k) ≥ m, so that the m clauses can each use a distinct combination of variables.
- Here m = 5 clauses, n = 5 variables, and k = 3.
Let's check: C(5,3) = 5! / (3! × 2!) = 10 ≥ 5, so there are more than enough distinct variable combinations.
Note: This calculation gives you the theoretical minimum if you're just
concerned with having enough distinct combinations of variables. In practice,
to create an unsatisfiable formula, you also need to consider how the literals
(variables or their negations) are arranged.
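As a quick check, the count can be computed directly; a minimal sketch in Python (the values n = 5, k = 3, m = 5 are the ones used above):

from math import comb

# n variables, k distinct variables per clause, m clauses (values from the exercise above)
n, k, m = 5, 3, 5
print(comb(n, k))        # 10 distinct ways to choose the 3 variables of a clause
print(comb(n, k) >= m)   # True: enough distinct variable combinations for 5 clauses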
Let's go step by step to clearly explain why at most 20 out of 32 assignments can be eliminated
when we have five 3-SAT clauses.
Step 1: Understanding the Total Assignments
2^5 = 32
These 32 assignments represent every possible way the 5 variables can be assigned True
or False.
Step 2: What a Single 3-SAT Clause Eliminates
Each clause is a disjunction (OR) of three literals. This means at least one of the literals must be True for the clause to be satisfied. The three variables appearing in a clause have
2^3 = 8
possible assignments.
Out of these 8 assignments, only 1 makes the clause False (where all three literals are
False).
Thus, a single 3-SAT clause eliminates 1 assignment out of every 8 possible
assignments of its three variables.
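This is easy to confirm by brute force; a minimal sketch using the clause (A ∨ B ∨ C), which is just an illustrative choice (any 3-literal clause behaves the same way):

from itertools import product

# Enumerate all 2^3 = 8 assignments to A, B, C and keep those that falsify (A or B or C).
falsifying = [(a, b, c) for a, b, c in product([False, True], repeat=3) if not (a or b or c)]
print(len(falsifying))   # 1 -> only A = B = C = False makes the clause False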
Step 3: What 5 Clauses Can Eliminate Together
Now, consider 5 such clauses, each involving its own set of three of the five variables.
Each clause removes at most 4 assignments from the total of 32.
Why 4?
A clause is falsified by exactly one assignment to its three variables, and the remaining 5 − 3 = 2 variables are free, so exactly 2^2 = 4 of the 32 full assignments falsify that clause.
If all 5 clauses were independent (i.e., no two of them eliminated the same assignment), they would each eliminate 4 unique assignments.
This means the maximum number of assignments eliminated is:
5 × 4 = 20
so the number of assignments that must remain satisfying is at least:
32 − 20 = 12
The assumption that each clause removes a completely new set of 4 assignments may
not always hold.
Some of the eliminated assignments may overlap between different clauses.
This means in reality, fewer than 20 assignments might actually be removed.
But the key takeaway remains the same: Not all 32 assignments are eliminated, so at
least one satisfying assignment always remains.
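This can be verified by brute force for any concrete choice of five clauses. The sketch below uses one arbitrary illustrative set of clauses over the variables A..E (these clauses are not taken from the exercise text; any other choice gives the same bound):

from itertools import product

# Each clause is a list of (variable index, wanted value); a clause is satisfied if any of
# its three variables has the wanted value.
clauses = [
    [(0, True), (1, True), (2, True)],      # A or B or C
    [(0, False), (1, True), (3, True)],     # not A or B or D
    [(1, False), (2, True), (4, True)],     # not B or C or E
    [(2, False), (3, True), (4, False)],    # not C or D or not E
    [(0, True), (3, False), (4, True)],     # A or not D or E
]

def satisfied(assignment, clause):
    return any(assignment[var] == wanted for var, wanted in clause)

assignments = list(product([False, True], repeat=5))
eliminated = [a for a in assignments if not all(satisfied(a, c) for c in clauses)]
print(len(assignments), len(eliminated))    # 32 total; eliminated is never more than 5 * 4 = 20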
Final Answer
With 5 variables and only five 3-SAT clauses, at most 5 × 4 = 20 of the 2^5 = 32 assignments can be eliminated, so at least 12 satisfying assignments remain and the formula is always satisfiable.
Minimal Number of 3-SAT Clauses That Make the Problem Unsatisfiable (With
5 Variables)
We need to determine the smallest number of 3-SAT clauses (each involving 3 distinct
variables) that can create an unsatisfiable formula while keeping the number of variables fixed
at 5.
For example, take all eight possible patterns of negations over the three variables A, B, and C (the remaining variables D and E are simply left unconstrained):
(A ∨ B ∨ C) ∧ (A ∨ B ∨ ¬C) ∧ (A ∨ ¬B ∨ C) ∧ (A ∨ ¬B ∨ ¬C) ∧ (¬A ∨ B ∨ C) ∧ (¬A ∨ B ∨ ¬C) ∧ (¬A ∨ ¬B ∨ C) ∧ (¬A ∨ ¬B ∨ ¬C)
Every assignment to A, B, C falsifies exactly one of these eight clauses, so no assignment can satisfy all of them.
Each clause over 3 of the 5 variables rules out exactly 2^(5−3) = 4 of the 2^5 = 32 assignments, so an unsatisfiable formula needs at least 32 / 4 = 8 clauses, and the formula above shows that 8 are enough.
✅ Thus, the minimal number of 3-SAT clauses that makes a formula unsatisfiable with 5 variables is 8.
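The unsatisfiability of the eight clauses over A, B, C given above can be checked by brute force; a minimal sketch (D and E never appear in the clauses, so they cannot help satisfy the formula):

from itertools import product

def formula_satisfied(a, b, c):
    # The eight clauses cover every sign pattern over A, B, C.
    return all([
        a or b or c,         a or b or not c,
        a or not b or c,     a or not b or not c,
        not a or b or c,     not a or b or not c,
        not a or not b or c, not a or not b or not c,
    ])

print(any(formula_satisfied(a, b, c) for a, b, c in product([False, True], repeat=3)))  # False -> unsatisfiable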
b. Two clauses are semantically distinct if they are not logically equivalent.
How many semantically distinct clauses with exactly two literals can be formed from n proposition symbols?
There are 2n literals (each of the n variables, either negated or not), so there are C(2n, 2) = n(2n − 1) ways to choose two distinct literals for a clause.
For each of the n variables, there is exactly 1 tautological clause that can be formed (the variable together with its own negation), giving n tautologies total.
Subtracting these gives:
n(2n − 1) − n = 2n² − 2n
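A small enumeration confirms the count; a sketch in Python, representing a literal as a (variable index, sign) pair and a two-literal clause as an unordered pair of distinct literals (the choice n = 4 is just a spot check):

from itertools import combinations

n = 4
literals = [(v, s) for v in range(n) for s in (True, False)]   # 2n literals
pairs = list(combinations(literals, 2))                        # C(2n, 2) = n(2n - 1) pairs
# A pair on the same variable (necessarily with opposite signs) is a tautology.
non_tautologies = [p for p in pairs if p[0][0] != p[1][0]]
print(len(pairs), len(non_tautologies), 2 * n * n - 2 * n)     # 28 24 24 for n = 4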
When training a linear regression model, we adjust the weights (w_0 and w_1) to minimize a loss function, usually the Mean Squared Error (MSE). The weight space refers to all possible values of these weights. In univariate linear regression (one feature, one output), the weight space is two-dimensional (w_0, w_1), and the loss function can be visualized as a surface in three-dimensional space (one axis for w_0, one for w_1, and the vertical axis for the loss value).
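Concretely, for a training set of N examples (x_j, y_j), the surface being described is the standard MSE loss, written here in the same notation:
Loss(w_0, w_1) = (1/N) × Σ_{j=1..N} (y_j − (w_0 + w_1 x_j))²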
Convexity of the Loss Function
The loss function for linear regression using L2 loss (MSE) is always convex.
A convex function is one whose surface is bowl-shaped: any line segment joining two points on the surface lies on or above the surface.
The gradient descent algorithm follows the slope of this function to find the lowest point
(global minimum).
Since a convex function has only one minimum (the global minimum), there are no
local minima to get stuck in.
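One way to see why the bowl shape is guaranteed is that the second derivatives of the MSE loss do not depend on the weights at all:
∂²Loss/∂w_0² = 2,  ∂²Loss/∂w_1² = (2/N) × Σ_j x_j²,  ∂²Loss/∂w_0∂w_1 = (2/N) × Σ_j x_j
The resulting 2×2 Hessian matrix is positive semidefinite for any data set (its determinant is proportional to the variance of the x_j), which is exactly the condition for convexity.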
A local minimum is a point where the function has a smaller value than neighboring points but
is not the lowest point overall. In non-convex loss functions (such as deep neural networks),
gradient descent can get stuck in these local minima, leading to suboptimal solutions. However,
in linear regression:
1. Predictability: Since there’s only one global minimum, training a linear regression
model is straightforward.
2. Efficient Optimization: With a suitable learning rate, gradient descent (or a closed-form least-squares solution) will converge to the global minimum (see the sketch after this list).
3. No Need for Advanced Optimization Tricks: Unlike deep learning, where techniques
like learning rate scheduling or momentum help escape local minima, linear regression
does not face this issue.
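For illustration, a minimal gradient-descent sketch for univariate linear regression; the toy data, learning rate, and iteration count below are arbitrary illustrative choices, not values from the text. Because the loss surface is convex, any starting point for (w_0, w_1) ends up at the same global minimum.

# Toy data: roughly y = 2x + 1 with a little noise; alpha is the learning rate.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
w0, w1 = 0.0, 0.0          # any starting point works for a convex loss
alpha = 0.02
N = len(xs)

for _ in range(5000):
    # Gradients of the MSE loss with respect to w0 and w1.
    g0 = sum(2 * ((w0 + w1 * x) - y) for x, y in zip(xs, ys)) / N
    g1 = sum(2 * ((w0 + w1 * x) - y) * x for x, y in zip(xs, ys)) / N
    w0 -= alpha * g0
    w1 -= alpha * g1

print(round(w0, 3), round(w1, 3))   # converges near the global minimum (roughly 1.0 and 2.0)

Starting instead from, say, w0, w1 = 10.0, -5.0 converges to the same values, which is the practical meaning of "no local minima to get stuck in."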