HDU6651 Final Exam - Greedy

Final Exam

Time Limit: 4000/2000 MS (Java/Others)   Memory Limit: 524288/524288 K (Java/Others)
Total Submission(s): 462   Accepted Submission(s): 199

Problem Description

Final Exam is coming! Cuber QQ now has one night to prepare for tomorrow's exam.

The exam will consist of n problems sharing altogether m points. Cuber QQ doesn't know the exact distribution. Of course, different problems might have different points; in some extreme cases, a problem might be worth 0 points, or all m points. Points must be integers; a problem cannot be worth 0.5 points.

What he knows is that these n problems will cover n totally different topics. For example, one could test your understanding of Dynamic Programming, while another might be about the history of China in the 19th century. So he has to divide his night to prepare each of these topics separately. Also, if a problem is worth x points in tomorrow's exam, it takes at least x+1 hours to prepare everything needed for it. If he spends fewer than x+1 hours preparing, he will fail that problem.

Cuber QQ's goal, strangely, is not to score as many points as possible, but to solve at least k problems no matter how the examination paper looks, so as to escape his parents' scolding. So he wonders how many hours at least he needs to achieve this goal.

Input

The first line of the input is an integer t (1≤t≤20 000), denoting the number of test cases.

Each test case contains three space-separated integers n, m, k (0 ≤ m ≤ 10^9, 1 ≤ k ≤ n ≤ 10^9).

Output

For each test case, output the number of hours Cuber QQ needs.

Sample Input

2
1 10 1
10 109 10

Sample Output

11
1100

Hint

In sample 1, Cuber QQ has to solve the single problem, so he prepares at least 11 hours, since that problem may be worth all 10 points.
In sample 2, he has to solve all ten problems, so he prepares at least 110 hours for each of them, since any single problem may be worth 109 points; this gives 1100 hours in total.


Problem summary:

Given three integers $n, m, k$ ($0 \le m \le 10^9$, $1 \le k \le n \le 10^9$): the upcoming exam covers $n$ unrelated topics and is worth $m$ points in total. If topic $i$ is worth $x_i$ points ($x_i \ge 0$, $\sum_{i=1}^{n} x_i = m$), then at least $x_i + 1$ hours must be spent reviewing it to earn its points; otherwise those points are lost. Find a review plan $(y_i)$ of minimum total time such that, no matter how the paper distributes the points $(x_i)$, the points of at least $k$ topics are guaranteed, and output that minimum time.


Analysis:

First consider this subproblem: for a review plan $(y_i)$, find the point distribution $(x_i)$ with the smallest total that makes the plan fail to earn the points of $k$ topics. The adversary's best move is to greedily assign points to the topics with the shortest review times in $(y_i)$, setting $x_i = y_i$ (the smallest score that denies topic $i$), until $(n-k+1)$ topics are denied, and to give every remaining topic $0$ points.
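
To make the adversary concrete, here is a minimal sketch (my own illustration, not from the original post; the helper name plan_survives is hypothetical) that checks whether a review plan $(y_i)$ withstands every exam worth $m$ points:

#include <algorithm>
#include <vector>
using namespace std;
typedef long long ll;

// True iff the plan y (hours per topic) guarantees at least k solved
// topics on every exam whose point values sum to m.
bool plan_survives(vector<ll> y, ll m, ll k)
{
	ll n = (ll)y.size();
	ll fail = n - k + 1;       // topics the adversary must deny
	sort(y.begin(), y.end());  // adversary denies the cheapest topics first
	ll cost = 0;
	for (ll i = 0; i < fail; ++i)
		cost += y[i];          // denying topic i costs x_i = y_i points
	return cost > m;           // plan holds iff the budget m is insufficient
}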

Therefore, for the same total review time $\sum y_i$, the values $(y_i)$ should be as even as possible, so that the plan can withstand the largest possible exam total. Hence the elements of the optimal plan $(y_i)$ take one of two adjacent values, $y_s$ and $y_s+1$. With this observation, the optimal solution is easy to construct:
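
For a quick illustration (my own numbers, not from the original post): take $n=3$, $k=2$, so $fail=2$, and fix the total review time at $8$ hours. The uneven plan $(0,4,4)$ is broken by any exam with $m \ge 4$, since denying the two cheapest topics costs only $0+4=4$ points, while the more even plan $(2,3,3)$ survives up to $m=4$, because denying any two topics costs at least $2+3=5$ points.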

For given $n, m, k$, let:

$fail = n - k + 1$, the minimum number of topics that must be denied for a review plan to fail; and let $low = \lfloor m / fail \rfloor$, $up = low + 1$.

Then $m$ can be decomposed as:
$$m = low \times cnt_{low} + up \times cnt_{up}, \quad \text{where } cnt_{up} = m - low \times fail,\ cnt_{low} = fail - cnt_{up}.$$

Now consider the following solution:

$$y_1 = y_2 = \dots = y_{cnt_{low}-1} = low, \qquad y_{cnt_{low}} = y_{cnt_{low}+1} = \dots = y_n = up$$

That is, the review plan $(y_i)$ consisting of $(cnt_{low}-1)$ copies of $low$ and $(n-(cnt_{low}-1))$ copies of $up$ is guaranteed to earn the points of $k$ topics on any exam worth $m$. Proof:

Construct $(x_i)$ with the adversary's greedy strategy: pick the smallest remaining review time in $(y_i)$, $fail$ times, each time setting $x_i = y_i$, and set all remaining $x_i$ to $0$. The adversary then has to spend
$$m' = \sum x_i = (cnt_{low}-1) \times low + (cnt_{up}+1) \times up = low \times cnt_{low} + up \times cnt_{up} + 1 = m + 1 > m,$$
so defeating this plan requires at least $m+1$ points, and the plan withstands every exam worth $m$. On the other hand, for any plan $(y'_i)$ whose cost is smaller by $1$, i.e. $\sum y'_i = (\sum y_i) - 1$, it is easy to verify that the same greedy adversary defeats it with an exam worth $m$. Hence reducing the cost by $1$ breaks the guarantee, and $(y_i)$ is the optimum.
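
As a sanity check against sample 2 ($n=10$, $m=109$, $k=10$): $fail = 10-10+1 = 1$, $low = \lfloor 109/1 \rfloor = 109$, $up = 110$, $cnt_{up} = 109 - 109 \times 1 = 0$, $cnt_{low} = 1 - 0 = 1$. The plan therefore uses $cnt_{low}-1 = 0$ copies of $low$ and $10$ copies of $up$, for a total of $10 \times 110 = 1100$ hours, matching the sample output.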

Since each test case takes only a constant number of arithmetic operations, the time complexity is $O(1)$ per test case.

The code:

#include <cstdio>

typedef long long ll;

int main()
{
	ll n, m, k;
	int t;
	scanf("%d", &t);

	while (t--)
	{
		scanf("%lld%lld%lld", &n, &m, &k);

		ll fail = n - k + 1;         // topics the adversary must deny to beat the plan
		ll low = m / fail;           // smaller of the two adjacent plan values
		ll up = low + 1;             // larger of the two adjacent plan values
		ll cnt_up = m - low * fail;  // so that m = low * cnt_low + up * cnt_up
		ll cnt_low = fail - cnt_up;

		// (cnt_low - 1) topics get `low` hours, the other n - (cnt_low - 1) get `up` hours
		ll ans = low * (cnt_low - 1) + up * (n - (cnt_low - 1));

		printf("%lld\n", ans);
	}
	return 0;
}
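
As an optional cross-check (my own sketch, not part of the original post), a brute-force search can confirm the closed-form answer for tiny $n, m, k$: enumerate every plan with entries in $\{0, \dots, m+1\}$ (the optimal plan never needs a value above $up \le m+1$), keep the cheapest plan that survives the greedy adversary described above, and compare it with the formula.

#include <algorithm>
#include <cstdio>
#include <vector>
using namespace std;
typedef long long ll;

// Closed-form answer derived in the analysis above.
ll formula(ll n, ll m, ll k)
{
	ll fail = n - k + 1;
	ll low = m / fail, up = low + 1;
	ll cnt_up = m - low * fail;
	ll cnt_low = fail - cnt_up;
	return low * (cnt_low - 1) + up * (n - (cnt_low - 1));
}

// Greedy adversary: the plan survives iff denying the `fail`
// cheapest topics costs more than the m points available.
bool survives(vector<ll> y, ll m, ll k)
{
	sort(y.begin(), y.end());
	ll fail = (ll)y.size() - k + 1, cost = 0;
	for (ll i = 0; i < fail; ++i)
		cost += y[i];
	return cost > m;
}

int main()
{
	for (ll n = 1; n <= 4; ++n)
		for (ll m = 0; m <= 6; ++m)
			for (ll k = 1; k <= n; ++k)
			{
				ll best = -1;
				vector<ll> y(n, 0);
				while (true)
				{
					if (survives(y, m, k))
					{
						ll total = 0;
						for (ll v : y)
							total += v;
						if (best < 0 || total < best)
							best = total;
					}
					// advance y like an odometer over {0, ..., m+1}^n
					ll i = 0;
					while (i < n && y[i] == m + 1)
						y[i++] = 0;
					if (i == n)
						break;
					++y[i];
				}
				if (best != formula(n, m, k))
					printf("mismatch at n=%lld m=%lld k=%lld\n", n, m, k);
			}
	puts("all cases checked");
	return 0;
}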



Reposted from blog.csdn.net/weixin_44327262/article/details/99326110