I need to implement a custom loss function for a regression problem. My data are one-dimensional, each sample being a vector of length 1000. In total I have 100,000 samples of these 1000 features. Each sample (a vector of length 1000) is associated with 4 parameters (a, b, m, s). The mapping between samples and parameters is known for the training data, as usual.

I need to develop a deep learning model that outputs 4 parameters (a2, b2, m2, s2) given a vector of length 1000. The custom loss function that I need to define in PyTorch is then

```
loss = Integral(function(a, b, m, s, a2, b2, m2, s2))
```

The problem is that I am not able to compute this loss inside the torch framework with something like `torch.quad`, because no such function exists in PyTorch. I therefore compute the loss with SciPy's `quad`, but that returns a Python float. I can convert this float back into a tensor, but since I do this at every iteration of the loop, the gradients do not evolve over time; instead, a fresh, disconnected tensor is created at each iteration.
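To make the issue concrete, here is a minimal standalone example of what I mean: the float produced by `.item()` (or by `quad`) cuts the autograd graph, so the tensor rebuilt from it is a leaf with no path back to the model parameters.

```python
import torch

w = torch.randn(3, requires_grad=True)   # stand-in for a model parameter
out = (w * 2).sum()                      # differentiable computation

# Simulating my quad-based loss: the graph is cut by .item()
loss = torch.tensor(out.item() ** 2, requires_grad=True)
loss.backward()

print(w.grad)     # None: no gradient ever reaches w
print(loss.grad)  # tensor(1.): the only gradient lives on the detached leaf
```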

Here is the relevant portion of my code (it is adapted from a 1D convolutional net):

```
def train(ep):
    model.train()
    total_loss = 0
    count = 0
    train_idx_list = np.arange(len(X_train), dtype="int32")
    np.random.shuffle(train_idx_list)
    for idx in train_idx_list:
        data_line = X_train[idx]
        x = Variable(data_line)
        if args.cuda:
            x = x.cuda()
        optimizer.zero_grad()
        output = model(x.unsqueeze(0)).squeeze(0)
        a = (parameter1[idx] * 40.0000) - 20.0000
        b = parameter2[idx] * 10
        m = parameter3[idx] * 4.8063e-07
        s = parameter4[idx] * 3.8284e-07
        a2 = (output[0][0].item() * 40.0000) - 20.0000
        b2 = output[0][1].item() * 10
        m2 = output[0][2].item() * 4.8063e-07
        s2 = output[0][3].item() * 3.8284e-07
        loss = torch.tensor(
            quad(integrand, 0, 5e-07,
                 args=(a[0], b[0], m[0], s[0], a2, b2, m2, s2),
                 points=(max(0, min(m[0] - 2 * s[0], m2 - 2 * s2)),
                         max(m[0] + 2 * s[0], m2 + 2 * s2)))[0],
            requires_grad=True, device=cuda0)
        total_loss += loss.item()
        count += 1  # output.size(0)
        loss.backward()
        if args.clip > 0:
            torch.nn.utils.clip_grad_norm_(model.parameters(), args.clip)
        optimizer.step()
        if idx > 0 and idx % args.log_interval == 0:
            cur_loss = total_loss / count
            print("Epoch {:2d} | lr {:.5f} | loss {:.5f}".format(ep, lr, cur_loss))
            total_loss = 0.0
            count = 0
```

The integrand is defined as follows:

```
def integrand(y, a, b, m, s, a2, b2, m2, s2):
    def density(a, b, m, s):
        # 0.5 * b/s * exp(-(|y-m|/s)**b) * erfc(-a*(y-m)/(s*sqrt(2))) / gamma(1/b)
        return (0.5 * b / s
                * math.exp(-(abs(y - m) / s) ** b)
                * math.erfc(-a * (y - m) / (s * math.sqrt(2)))
                / math.gamma(1 / b))
    return abs(density(a, b, m, s) - density(a2, b2, m2, s2))
```

How can one deal with such a problem?